00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 205
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3706
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.011 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.012 The recommended git tool is: git
00:00:00.012 using credential 00000000-0000-0000-0000-000000000002
00:00:00.014 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.026 Fetching changes from the remote Git repository
00:00:00.028 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.040 Using shallow fetch with depth 1
00:00:00.040 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.040 > git --version # timeout=10
00:00:00.054 > git --version # 'git version 2.39.2'
00:00:00.054 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.069 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.069 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.677 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.687 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.698 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.698 > git config core.sparsecheckout # timeout=10
00:00:02.707 > git read-tree -mu HEAD # timeout=10
00:00:02.722 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.741 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.741 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.953 [Pipeline] Start of Pipeline
00:00:02.966 [Pipeline] library
00:00:02.967 Loading library shm_lib@master
00:00:02.967 Library shm_lib@master is cached. Copying from home.
00:00:02.982 [Pipeline] node
00:00:02.994 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:02.995 [Pipeline] {
00:00:03.006 [Pipeline] catchError
00:00:03.007 [Pipeline] {
00:00:03.018 [Pipeline] wrap
00:00:03.026 [Pipeline] {
00:00:03.034 [Pipeline] stage
00:00:03.035 [Pipeline] { (Prologue)
00:00:03.048 [Pipeline] echo
00:00:03.049 Node: VM-host-WFP7
00:00:03.053 [Pipeline] cleanWs
00:00:03.063 [WS-CLEANUP] Deleting project workspace...
00:00:03.063 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.068 [WS-CLEANUP] done
00:00:03.284 [Pipeline] setCustomBuildProperty
00:00:03.364 [Pipeline] httpRequest
00:00:03.959 [Pipeline] echo
00:00:03.961 Sorcerer 10.211.164.20 is alive
00:00:03.970 [Pipeline] retry
00:00:03.972 [Pipeline] {
00:00:03.985 [Pipeline] httpRequest
00:00:03.990 HttpMethod: GET
00:00:03.991 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.991 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.992 Response Code: HTTP/1.1 200 OK
00:00:03.993 Success: Status code 200 is in the accepted range: 200,404
00:00:03.993 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.269 [Pipeline] }
00:00:04.285 [Pipeline] // retry
00:00:04.292 [Pipeline] sh
00:00:04.576 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.590 [Pipeline] httpRequest
00:00:05.136 [Pipeline] echo
00:00:05.137 Sorcerer 10.211.164.20 is alive
00:00:05.146 [Pipeline] retry
00:00:05.147 [Pipeline] {
00:00:05.161 [Pipeline] httpRequest
00:00:05.167 HttpMethod: GET
00:00:05.167 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:05.167 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:05.169 Response Code: HTTP/1.1 200 OK
00:00:05.169 Success: Status code 200 is in the accepted range: 200,404
00:00:05.170 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:26.893 [Pipeline] }
00:00:26.910 [Pipeline] // retry
00:00:26.919 [Pipeline] sh
00:00:27.203 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:29.754 [Pipeline] sh
00:00:30.038 + git -C spdk log --oneline -n5
00:00:30.038 b18e1bd62 version: v24.09.1-pre
00:00:30.038 19524ad45 version: v24.09
00:00:30.038 9756b40a3 dpdk: update submodule to include alarm_cancel fix
00:00:30.038 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810
00:00:30.038 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys
00:00:30.055 [Pipeline] withCredentials
00:00:30.066 > git --version # timeout=10
00:00:30.076 > git --version # 'git version 2.39.2'
00:00:30.095 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:30.096 [Pipeline] {
00:00:30.104 [Pipeline] retry
00:00:30.105 [Pipeline] {
00:00:30.116 [Pipeline] sh
00:00:30.410 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:30.683 [Pipeline] }
00:00:30.694 [Pipeline] // retry
00:00:30.699 [Pipeline] }
00:00:30.713 [Pipeline] // withCredentials
00:00:30.722 [Pipeline] httpRequest
00:00:31.343 [Pipeline] echo
00:00:31.344 Sorcerer 10.211.164.20 is alive
00:00:31.352 [Pipeline] retry
00:00:31.354 [Pipeline] {
00:00:31.366 [Pipeline] httpRequest
00:00:31.371 HttpMethod: GET
00:00:31.372 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:31.372 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:31.388 Response Code: HTTP/1.1 200 OK
00:00:31.389 Success: Status code 200 is in the accepted range: 200,404
00:00:31.389 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:10.343 [Pipeline] }
00:01:10.361 [Pipeline] // retry
00:01:10.369 [Pipeline] sh
00:01:10.654 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:12.050 [Pipeline] sh
00:01:12.335 + git -C dpdk log --oneline -n5
00:01:12.335 eeb0605f11 version: 23.11.0
00:01:12.335 238778122a doc: update release notes for 23.11
00:01:12.335 46aa6b3cfc doc: fix description of RSS features
00:01:12.335 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:12.335 7e421ae345 devtools: support skipping forbid rule check
00:01:12.355 [Pipeline] writeFile
00:01:12.371 [Pipeline] sh
00:01:12.658 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:12.673 [Pipeline] sh
00:01:12.959 + cat autorun-spdk.conf
00:01:12.959 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.959 SPDK_RUN_ASAN=1
00:01:12.959 SPDK_RUN_UBSAN=1
00:01:12.959 SPDK_TEST_RAID=1
00:01:12.959 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:12.959 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:12.959 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:12.967 RUN_NIGHTLY=1
00:01:12.969 [Pipeline] }
00:01:12.984 [Pipeline] // stage
00:01:13.000 [Pipeline] stage
00:01:13.002 [Pipeline] { (Run VM)
00:01:13.016 [Pipeline] sh
00:01:13.300 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:13.301 + echo 'Start stage prepare_nvme.sh'
00:01:13.301 Start stage prepare_nvme.sh
00:01:13.301 + [[ -n 1 ]]
00:01:13.301 + disk_prefix=ex1
00:01:13.301 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:13.301 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:13.301 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:13.301 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.301 ++ SPDK_RUN_ASAN=1
00:01:13.301 ++ SPDK_RUN_UBSAN=1
00:01:13.301 ++ SPDK_TEST_RAID=1
00:01:13.301 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:13.301 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:13.301 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:13.301 ++ RUN_NIGHTLY=1
00:01:13.301 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:13.301 + nvme_files=()
00:01:13.301 + declare -A nvme_files
00:01:13.301 + backend_dir=/var/lib/libvirt/images/backends
00:01:13.301 + nvme_files['nvme.img']=5G
00:01:13.301 + nvme_files['nvme-cmb.img']=5G
00:01:13.301 + nvme_files['nvme-multi0.img']=4G
00:01:13.301 + nvme_files['nvme-multi1.img']=4G
00:01:13.301 + nvme_files['nvme-multi2.img']=4G
00:01:13.301 + nvme_files['nvme-openstack.img']=8G
00:01:13.301 + nvme_files['nvme-zns.img']=5G
00:01:13.301 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:13.301 + (( SPDK_TEST_FTL == 1 ))
00:01:13.301 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:13.301 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:13.301 + for nvme in "${!nvme_files[@]}"
00:01:13.301 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:01:13.301 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:13.301 + for nvme in "${!nvme_files[@]}"
00:01:13.301 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:01:13.301 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:13.301 + for nvme in "${!nvme_files[@]}"
00:01:13.301 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:01:13.301 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:13.301 + for nvme in "${!nvme_files[@]}"
00:01:13.301 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:01:13.301 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:13.301 + for nvme in "${!nvme_files[@]}"
00:01:13.301 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:01:13.301 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:13.301 + for nvme in "${!nvme_files[@]}"
00:01:13.301 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:01:13.301 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:13.565 + for nvme in "${!nvme_files[@]}"
00:01:13.565 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:01:13.565 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:13.565 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:01:13.565 + echo 'End stage prepare_nvme.sh'
00:01:13.565 End stage prepare_nvme.sh
00:01:13.579 [Pipeline] sh
00:01:13.865 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:13.865 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:01:13.865
00:01:13.865 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:13.865 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:13.865 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:13.865 HELP=0
00:01:13.865 DRY_RUN=0
00:01:13.865 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:01:13.865 NVME_DISKS_TYPE=nvme,nvme,
00:01:13.865 NVME_AUTO_CREATE=0
00:01:13.865 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:01:13.865 NVME_CMB=,,
00:01:13.865 NVME_PMR=,,
00:01:13.865 NVME_ZNS=,,
00:01:13.865 NVME_MS=,,
00:01:13.865 NVME_FDP=,,
00:01:13.865 SPDK_VAGRANT_DISTRO=fedora39
00:01:13.865 SPDK_VAGRANT_VMCPU=10
00:01:13.865 SPDK_VAGRANT_VMRAM=12288
00:01:13.866 SPDK_VAGRANT_PROVIDER=libvirt
00:01:13.866 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:13.866 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:13.866 SPDK_OPENSTACK_NETWORK=0
00:01:13.866 VAGRANT_PACKAGE_BOX=0
00:01:13.866 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:13.866 FORCE_DISTRO=true
00:01:13.866 VAGRANT_BOX_VERSION=
00:01:13.866 EXTRA_VAGRANTFILES=
00:01:13.866 NIC_MODEL=virtio
00:01:13.866
00:01:13.866 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:13.866 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:15.776 Bringing machine 'default' up with 'libvirt' provider...
00:01:16.351 ==> default: Creating image (snapshot of base box volume).
00:01:16.351 ==> default: Creating domain with the following settings...
00:01:16.351 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733588834_e98e3e4a5cedefb8b19d
00:01:16.351 ==> default: -- Domain type: kvm
00:01:16.351 ==> default: -- Cpus: 10
00:01:16.351 ==> default: -- Feature: acpi
00:01:16.351 ==> default: -- Feature: apic
00:01:16.351 ==> default: -- Feature: pae
00:01:16.351 ==> default: -- Memory: 12288M
00:01:16.351 ==> default: -- Memory Backing: hugepages:
00:01:16.351 ==> default: -- Management MAC:
00:01:16.351 ==> default: -- Loader:
00:01:16.351 ==> default: -- Nvram:
00:01:16.351 ==> default: -- Base box: spdk/fedora39
00:01:16.351 ==> default: -- Storage pool: default
00:01:16.351 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733588834_e98e3e4a5cedefb8b19d.img (20G)
00:01:16.351 ==> default: -- Volume Cache: default
00:01:16.351 ==> default: -- Kernel:
00:01:16.351 ==> default: -- Initrd:
00:01:16.351 ==> default: -- Graphics Type: vnc
00:01:16.351 ==> default: -- Graphics Port: -1
00:01:16.351 ==> default: -- Graphics IP: 127.0.0.1
00:01:16.351 ==> default: -- Graphics Password: Not defined
00:01:16.351 ==> default: -- Video Type: cirrus
00:01:16.351 ==> default: -- Video VRAM: 9216
00:01:16.351 ==> default: -- Sound Type:
00:01:16.351 ==> default: -- Keymap: en-us
00:01:16.351 ==> default: -- TPM Path:
00:01:16.351 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:16.351 ==> default: -- Command line args:
00:01:16.351 ==> default: -> value=-device,
00:01:16.351 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:16.351 ==> default: -> value=-drive,
00:01:16.351 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:01:16.351 ==> default: -> value=-device,
00:01:16.351 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:16.351 ==> default: -> value=-device,
00:01:16.351 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:16.351 ==> default: -> value=-drive,
00:01:16.351 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:16.351 ==> default: -> value=-device,
00:01:16.351 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:16.351 ==> default: -> value=-drive,
00:01:16.351 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:16.351 ==> default: -> value=-device,
00:01:16.351 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:16.351 ==> default: -> value=-drive,
00:01:16.351 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:16.351 ==> default: -> value=-device,
00:01:16.351 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:16.612 ==> default: Creating shared folders metadata...
00:01:16.612 ==> default: Starting domain.
00:01:17.991 ==> default: Waiting for domain to get an IP address...
00:01:36.089 ==> default: Waiting for SSH to become available...
00:01:36.089 ==> default: Configuring and enabling network interfaces...
00:01:41.365 default: SSH address: 192.168.121.149:22
00:01:41.365 default: SSH username: vagrant
00:01:41.365 default: SSH auth method: private key
00:01:44.694 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:51.272 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:57.947 ==> default: Mounting SSHFS shared folder...
00:01:59.856 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:59.856 ==> default: Checking Mount..
00:02:01.238 ==> default: Folder Successfully Mounted!
00:02:01.238 ==> default: Running provisioner: file...
00:02:02.621 default: ~/.gitconfig => .gitconfig
00:02:02.881
00:02:02.881 SUCCESS!
00:02:02.881
00:02:02.881 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:02.881 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:02.881 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:02.881
00:02:02.891 [Pipeline] }
00:02:02.909 [Pipeline] // stage
00:02:02.920 [Pipeline] dir
00:02:02.920 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:02.922 [Pipeline] {
00:02:02.938 [Pipeline] catchError
00:02:02.940 [Pipeline] {
00:02:02.955 [Pipeline] sh
00:02:03.237 + vagrant ssh-config --host vagrant
00:02:03.237 + sed -ne /^Host/,$p
00:02:03.237 + tee ssh_conf
00:02:05.778 Host vagrant
00:02:05.778 HostName 192.168.121.149
00:02:05.778 User vagrant
00:02:05.778 Port 22
00:02:05.778 UserKnownHostsFile /dev/null
00:02:05.778 StrictHostKeyChecking no
00:02:05.778 PasswordAuthentication no
00:02:05.778 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:05.778 IdentitiesOnly yes
00:02:05.778 LogLevel FATAL
00:02:05.778 ForwardAgent yes
00:02:05.778 ForwardX11 yes
00:02:05.778
00:02:05.792 [Pipeline] withEnv
00:02:05.794 [Pipeline] {
00:02:05.806 [Pipeline] sh
00:02:06.132 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:06.132 source /etc/os-release
00:02:06.132 [[ -e /image.version ]] && img=$(< /image.version)
00:02:06.132 # Minimal, systemd-like check.
00:02:06.132 if [[ -e /.dockerenv ]]; then
00:02:06.132 # Clear garbage from the node's name:
00:02:06.132 # agt-er_autotest_547-896 -> autotest_547-896
00:02:06.132 # $HOSTNAME is the actual container id
00:02:06.132 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:06.132 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:06.132 # We can assume this is a mount from a host where container is running,
00:02:06.132 # so fetch its hostname to easily identify the target swarm worker.
00:02:06.132 container="$(< /etc/hostname) ($agent)"
00:02:06.132 else
00:02:06.132 # Fallback
00:02:06.132 container=$agent
00:02:06.132 fi
00:02:06.132 fi
00:02:06.132 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:06.132
00:02:06.414 [Pipeline] }
00:02:06.442 [Pipeline] // withEnv
00:02:06.448 [Pipeline] setCustomBuildProperty
00:02:06.459 [Pipeline] stage
00:02:06.460 [Pipeline] { (Tests)
00:02:06.472 [Pipeline] sh
00:02:06.751 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:07.025 [Pipeline] sh
00:02:07.306 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:07.580 [Pipeline] timeout
00:02:07.581 Timeout set to expire in 1 hr 30 min
00:02:07.582 [Pipeline] {
00:02:07.596 [Pipeline] sh
00:02:07.878 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:08.447 HEAD is now at b18e1bd62 version: v24.09.1-pre
00:02:08.459 [Pipeline] sh
00:02:08.741 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:09.016 [Pipeline] sh
00:02:09.299 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:09.579 [Pipeline] sh
00:02:09.865 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:10.124 ++ readlink -f spdk_repo
00:02:10.124 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:10.124 + [[ -n /home/vagrant/spdk_repo ]]
00:02:10.124 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:10.124 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:10.124 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:10.124 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:10.125 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:10.125 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:10.125 + cd /home/vagrant/spdk_repo
00:02:10.125 + source /etc/os-release
00:02:10.125 ++ NAME='Fedora Linux'
00:02:10.125 ++ VERSION='39 (Cloud Edition)'
00:02:10.125 ++ ID=fedora
00:02:10.125 ++ VERSION_ID=39
00:02:10.125 ++ VERSION_CODENAME=
00:02:10.125 ++ PLATFORM_ID=platform:f39
00:02:10.125 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:10.125 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:10.125 ++ LOGO=fedora-logo-icon
00:02:10.125 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:10.125 ++ HOME_URL=https://fedoraproject.org/
00:02:10.125 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:10.125 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:10.125 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:10.125 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:10.125 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:10.125 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:10.125 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:10.125 ++ SUPPORT_END=2024-11-12
00:02:10.125 ++ VARIANT='Cloud Edition'
00:02:10.125 ++ VARIANT_ID=cloud
00:02:10.125 + uname -a
00:02:10.125 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:10.125 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:10.694 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:10.694 Hugepages
00:02:10.694 node hugesize free / total
00:02:10.694 node0 1048576kB 0 / 0
00:02:10.694 node0 2048kB 0 / 0
00:02:10.694
00:02:10.694 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:10.694 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:10.694 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:10.694 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:10.694 + rm -f /tmp/spdk-ld-path
00:02:10.694 + source autorun-spdk.conf
00:02:10.694 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:10.694 ++ SPDK_RUN_ASAN=1
00:02:10.694 ++ SPDK_RUN_UBSAN=1
00:02:10.694 ++ SPDK_TEST_RAID=1
00:02:10.694 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:10.694 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:10.694 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:10.694 ++ RUN_NIGHTLY=1
00:02:10.694 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:10.694 + [[ -n '' ]]
00:02:10.694 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:10.694 + for M in /var/spdk/build-*-manifest.txt
00:02:10.694 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:10.694 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:10.694 + for M in /var/spdk/build-*-manifest.txt
00:02:10.694 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:10.694 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:10.694 + for M in /var/spdk/build-*-manifest.txt
00:02:10.694 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:10.694 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:10.694 ++ uname
00:02:10.694 + [[ Linux == \L\i\n\u\x ]]
00:02:10.694 + sudo dmesg -T
00:02:10.955 + sudo dmesg --clear
00:02:10.955 + dmesg_pid=6164
00:02:10.955 + sudo dmesg -Tw
00:02:10.955 + [[ Fedora Linux == FreeBSD ]]
00:02:10.955 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:10.955 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:10.955 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:10.955 + [[ -x /usr/src/fio-static/fio ]]
00:02:10.955 + export FIO_BIN=/usr/src/fio-static/fio
00:02:10.955 + FIO_BIN=/usr/src/fio-static/fio
00:02:10.955 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:10.955 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:10.955 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:10.955 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:10.955 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:10.955 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:10.955 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:10.955 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:10.955 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:10.955 Test configuration:
00:02:10.955 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:10.955 SPDK_RUN_ASAN=1
00:02:10.955 SPDK_RUN_UBSAN=1
00:02:10.955 SPDK_TEST_RAID=1
00:02:10.955 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:10.955 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:10.955 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:10.955 RUN_NIGHTLY=1
00:02:10.955 16:28:09 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:02:10.955 16:28:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:10.955 16:28:09 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:10.955 16:28:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:10.955 16:28:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:10.955 16:28:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:10.955 16:28:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:10.955 16:28:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:10.955 16:28:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:10.955 16:28:09 -- paths/export.sh@5 -- $ export PATH
00:02:10.955 16:28:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:10.955 16:28:09 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:10.955 16:28:09 -- common/autobuild_common.sh@479 -- $ date +%s
00:02:10.955 16:28:09 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733588889.XXXXXX
00:02:10.955 16:28:09 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733588889.mIHf47
00:02:10.955 16:28:09 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:02:10.955 16:28:09 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:02:10.955 16:28:09 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:10.955 16:28:09 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:10.955 16:28:09 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:10.955 16:28:09 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:10.955 16:28:09 -- common/autobuild_common.sh@495 -- $ get_config_params
00:02:10.955 16:28:09 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:10.955 16:28:09 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.955 16:28:09 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:02:10.955 16:28:09 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:02:10.955 16:28:09 -- pm/common@17 -- $ local monitor
00:02:10.955 16:28:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:10.955 16:28:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:10.955 16:28:09 -- pm/common@25 -- $ sleep 1
00:02:10.955 16:28:09 -- pm/common@21 -- $ date +%s
00:02:10.955 16:28:09 -- pm/common@21 -- $ date +%s
00:02:10.955 16:28:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733588889
00:02:10.955 16:28:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733588889
00:02:11.215 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733588889_collect-vmstat.pm.log
00:02:11.215 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733588889_collect-cpu-load.pm.log
00:02:12.154 16:28:10 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:02:12.155 16:28:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:12.155 16:28:10 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:12.155 16:28:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:12.155 16:28:10 -- spdk/autobuild.sh@16 -- $ date -u
00:02:12.155 Sat Dec 7 04:28:10 PM UTC 2024
00:02:12.155 16:28:10 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:12.155 v24.09-1-gb18e1bd62
00:02:12.155 16:28:10 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:12.155 16:28:10 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:12.155 16:28:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:12.155 16:28:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:12.155 16:28:10 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.155 ************************************
00:02:12.155 START TEST asan
00:02:12.155 ************************************
00:02:12.155 using asan
00:02:12.155 16:28:10 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:02:12.155
00:02:12.155 real 0m0.000s
00:02:12.155 user 0m0.000s
00:02:12.155 sys 0m0.000s
00:02:12.155 16:28:10 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:12.155 16:28:10 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:12.155 ************************************
00:02:12.155 END TEST asan
00:02:12.155 ************************************
00:02:12.155 16:28:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:12.155 16:28:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:12.155 16:28:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:12.155 16:28:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:12.155 16:28:10 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.155 ************************************
00:02:12.155 START TEST ubsan
00:02:12.155 ************************************
00:02:12.155 using ubsan
00:02:12.155 16:28:10 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:12.155
00:02:12.155 real 0m0.001s
00:02:12.155 user 0m0.000s
00:02:12.155 sys 0m0.000s
00:02:12.155 16:28:10 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:12.155 16:28:10 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:12.155 ************************************
00:02:12.155 END TEST ubsan
00:02:12.155 ************************************
00:02:12.155 16:28:10 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:02:12.155 16:28:10 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:12.155 16:28:10 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:12.155 16:28:10 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:02:12.155 16:28:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:12.155 16:28:10 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.155 ************************************
00:02:12.155 START TEST build_native_dpdk
00:02:12.155 ************************************
00:02:12.155 16:28:10 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:12.155 16:28:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:12.155 eeb0605f11 version: 23.11.0 00:02:12.155 238778122a doc: update release notes for 23.11 00:02:12.155 46aa6b3cfc doc: fix description of RSS features 00:02:12.155 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:12.155 7e421ae345 devtools: support skipping forbid rule check 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:12.155 16:28:11 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:12.155 16:28:11 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:12.155 16:28:11 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:12.415 patching file config/rte_config.h 00:02:12.415 Hunk #1 succeeded at 60 (offset 1 line). 
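The trace above walks `cmp_versions` through a component-by-component comparison of `23.11.0` against `21.11.0` before deciding whether the `rte_config.h` patch applies. A minimal sketch of the same idea, assuming a hypothetical helper `ver_lt` (not the repo's `cmp_versions`) and relying on `sort -V` for dotted-version ordering:

```shell
#!/bin/sh
# Hypothetical version comparison in the spirit of scripts/common.sh cmp_versions.
# ver_lt A B: succeeds iff version A sorts strictly before version B.
ver_lt() {
    [ "$1" != "$2" ] && \
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Mirrors the log's check: 23.11.0 < 21.11.0 is false, so the "lt" branch is skipped.
ver_lt 23.11.0 21.11.0 && echo "lt" || echo "not lt"
```

`sort -V` is a GNU coreutils extension; the in-tree script instead splits on `.-:` into arrays and compares numerically, which is more portable but longer.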
00:02:12.415 16:28:11 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:12.415 16:28:11 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:12.415 patching file lib/pcapng/rte_pcapng.c 00:02:12.415 16:28:11 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:12.415 16:28:11 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:12.415 16:28:11 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:12.415 16:28:11 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:12.415 16:28:11 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:12.415 16:28:11 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:02:12.415 16:28:11 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:12.415 16:28:11 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:17.728 The Meson build system 00:02:17.728 Version: 1.5.0 00:02:17.728 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:17.728 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:17.728 Build type: native build 00:02:17.728 Program cat found: YES (/usr/bin/cat) 00:02:17.728 Project name: DPDK 00:02:17.728 Project version: 23.11.0 00:02:17.728 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:17.728 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:17.728 Host machine cpu family: x86_64 00:02:17.728 Host machine cpu: x86_64 00:02:17.728 Message: ## Building in Developer Mode ## 00:02:17.728 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:17.728 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:17.728 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:17.728 Program python3 found: YES (/usr/bin/python3) 00:02:17.728 Program cat found: YES (/usr/bin/cat) 00:02:17.728 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
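Meson warns here that the `machine` option passed by the build script is deprecated. A hedged sketch of the equivalent configure step using the replacement option name, assuming a Meson/DPDK combination where `cpu_instruction_set` is available (the paths and flags are copied from the log, not verified):

```shell
# Sketch only: same DPDK configure invocation as above, with the deprecated
# -Dmachine=native replaced by the suggested -Dcpu_instruction_set=native.
meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_link_args= \
    '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
```

The warning is harmless for this build; `machine` is still honored, it is only the option name that changed.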
00:02:17.729 Compiler for C supports arguments -march=native: YES 00:02:17.729 Checking for size of "void *" : 8 00:02:17.729 Checking for size of "void *" : 8 (cached) 00:02:17.729 Library m found: YES 00:02:17.729 Library numa found: YES 00:02:17.729 Has header "numaif.h" : YES 00:02:17.729 Library fdt found: NO 00:02:17.729 Library execinfo found: NO 00:02:17.729 Has header "execinfo.h" : YES 00:02:17.729 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:17.729 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:17.729 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:17.729 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:17.729 Run-time dependency openssl found: YES 3.1.1 00:02:17.729 Run-time dependency libpcap found: YES 1.10.4 00:02:17.729 Has header "pcap.h" with dependency libpcap: YES 00:02:17.729 Compiler for C supports arguments -Wcast-qual: YES 00:02:17.729 Compiler for C supports arguments -Wdeprecated: YES 00:02:17.729 Compiler for C supports arguments -Wformat: YES 00:02:17.729 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:17.729 Compiler for C supports arguments -Wformat-security: NO 00:02:17.729 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:17.729 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:17.729 Compiler for C supports arguments -Wnested-externs: YES 00:02:17.729 Compiler for C supports arguments -Wold-style-definition: YES 00:02:17.729 Compiler for C supports arguments -Wpointer-arith: YES 00:02:17.729 Compiler for C supports arguments -Wsign-compare: YES 00:02:17.729 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:17.729 Compiler for C supports arguments -Wundef: YES 00:02:17.729 Compiler for C supports arguments -Wwrite-strings: YES 00:02:17.729 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:17.729 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:17.729 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:17.729 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:17.729 Program objdump found: YES (/usr/bin/objdump) 00:02:17.729 Compiler for C supports arguments -mavx512f: YES 00:02:17.729 Checking if "AVX512 checking" compiles: YES 00:02:17.729 Fetching value of define "__SSE4_2__" : 1 00:02:17.729 Fetching value of define "__AES__" : 1 00:02:17.729 Fetching value of define "__AVX__" : 1 00:02:17.729 Fetching value of define "__AVX2__" : 1 00:02:17.729 Fetching value of define "__AVX512BW__" : 1 00:02:17.729 Fetching value of define "__AVX512CD__" : 1 00:02:17.729 Fetching value of define "__AVX512DQ__" : 1 00:02:17.729 Fetching value of define "__AVX512F__" : 1 00:02:17.729 Fetching value of define "__AVX512VL__" : 1 00:02:17.729 Fetching value of define "__PCLMUL__" : 1 00:02:17.729 Fetching value of define "__RDRND__" : 1 00:02:17.729 Fetching value of define "__RDSEED__" : 1 00:02:17.729 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:17.729 Fetching value of define "__znver1__" : (undefined) 00:02:17.729 Fetching value of define "__znver2__" : (undefined) 00:02:17.729 Fetching value of define "__znver3__" : (undefined) 00:02:17.729 Fetching value of define "__znver4__" : (undefined) 00:02:17.729 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:17.729 Message: lib/log: Defining dependency "log" 00:02:17.729 Message: lib/kvargs: Defining dependency "kvargs" 00:02:17.729 Message: lib/telemetry: Defining dependency "telemetry" 00:02:17.729 Checking for function "getentropy" : NO 00:02:17.729 Message: lib/eal: Defining dependency "eal" 00:02:17.729 Message: lib/ring: Defining dependency "ring" 00:02:17.729 Message: lib/rcu: Defining dependency "rcu" 00:02:17.729 Message: lib/mempool: Defining dependency "mempool" 00:02:17.729 Message: lib/mbuf: Defining dependency "mbuf" 00:02:17.729 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:17.729 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:17.729 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:17.729 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:17.729 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:17.729 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:17.729 Compiler for C supports arguments -mpclmul: YES 00:02:17.729 Compiler for C supports arguments -maes: YES 00:02:17.729 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:17.729 Compiler for C supports arguments -mavx512bw: YES 00:02:17.729 Compiler for C supports arguments -mavx512dq: YES 00:02:17.729 Compiler for C supports arguments -mavx512vl: YES 00:02:17.729 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:17.729 Compiler for C supports arguments -mavx2: YES 00:02:17.729 Compiler for C supports arguments -mavx: YES 00:02:17.729 Message: lib/net: Defining dependency "net" 00:02:17.729 Message: lib/meter: Defining dependency "meter" 00:02:17.729 Message: lib/ethdev: Defining dependency "ethdev" 00:02:17.729 Message: lib/pci: Defining dependency "pci" 00:02:17.729 Message: lib/cmdline: Defining dependency "cmdline" 00:02:17.729 Message: lib/metrics: Defining dependency "metrics" 00:02:17.729 Message: lib/hash: Defining dependency "hash" 00:02:17.729 Message: lib/timer: Defining dependency "timer" 00:02:17.729 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.729 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:17.729 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:17.729 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:17.729 Message: lib/acl: Defining dependency "acl" 00:02:17.729 Message: lib/bbdev: Defining dependency "bbdev" 00:02:17.729 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:17.729 Run-time dependency libelf found: YES 0.191 00:02:17.729 Message: lib/bpf: Defining dependency "bpf" 00:02:17.729 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:02:17.729 Message: lib/compressdev: Defining dependency "compressdev" 00:02:17.729 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:17.729 Message: lib/distributor: Defining dependency "distributor" 00:02:17.729 Message: lib/dmadev: Defining dependency "dmadev" 00:02:17.729 Message: lib/efd: Defining dependency "efd" 00:02:17.729 Message: lib/eventdev: Defining dependency "eventdev" 00:02:17.729 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:17.729 Message: lib/gpudev: Defining dependency "gpudev" 00:02:17.729 Message: lib/gro: Defining dependency "gro" 00:02:17.729 Message: lib/gso: Defining dependency "gso" 00:02:17.729 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:17.729 Message: lib/jobstats: Defining dependency "jobstats" 00:02:17.729 Message: lib/latencystats: Defining dependency "latencystats" 00:02:17.729 Message: lib/lpm: Defining dependency "lpm" 00:02:17.729 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.729 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:17.729 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:17.729 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:17.729 Message: lib/member: Defining dependency "member" 00:02:17.729 Message: lib/pcapng: Defining dependency "pcapng" 00:02:17.729 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:17.729 Message: lib/power: Defining dependency "power" 00:02:17.729 Message: lib/rawdev: Defining dependency "rawdev" 00:02:17.729 Message: lib/regexdev: Defining dependency "regexdev" 00:02:17.729 Message: lib/mldev: Defining dependency "mldev" 00:02:17.729 Message: lib/rib: Defining dependency "rib" 00:02:17.729 Message: lib/reorder: Defining dependency "reorder" 00:02:17.729 Message: lib/sched: Defining dependency "sched" 00:02:17.729 Message: lib/security: Defining dependency "security" 00:02:17.729 Message: lib/stack: Defining dependency "stack" 00:02:17.729 Has header 
"linux/userfaultfd.h" : YES 00:02:17.729 Has header "linux/vduse.h" : YES 00:02:17.729 Message: lib/vhost: Defining dependency "vhost" 00:02:17.729 Message: lib/ipsec: Defining dependency "ipsec" 00:02:17.729 Message: lib/pdcp: Defining dependency "pdcp" 00:02:17.729 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.729 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:17.729 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:17.729 Message: lib/fib: Defining dependency "fib" 00:02:17.729 Message: lib/port: Defining dependency "port" 00:02:17.729 Message: lib/pdump: Defining dependency "pdump" 00:02:17.729 Message: lib/table: Defining dependency "table" 00:02:17.729 Message: lib/pipeline: Defining dependency "pipeline" 00:02:17.729 Message: lib/graph: Defining dependency "graph" 00:02:17.729 Message: lib/node: Defining dependency "node" 00:02:17.729 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:17.729 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:17.729 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:19.642 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:19.642 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:19.642 Compiler for C supports arguments -Wno-unused-value: YES 00:02:19.642 Compiler for C supports arguments -Wno-format: YES 00:02:19.642 Compiler for C supports arguments -Wno-format-security: YES 00:02:19.642 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:19.642 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:19.642 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:19.642 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:19.642 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:19.642 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:19.642 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:19.642 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:02:19.642 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:19.642 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:19.642 Has header "sys/epoll.h" : YES 00:02:19.642 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:19.642 Configuring doxy-api-html.conf using configuration 00:02:19.642 Configuring doxy-api-man.conf using configuration 00:02:19.642 Program mandb found: YES (/usr/bin/mandb) 00:02:19.642 Program sphinx-build found: NO 00:02:19.642 Configuring rte_build_config.h using configuration 00:02:19.642 Message: 00:02:19.642 ================= 00:02:19.642 Applications Enabled 00:02:19.642 ================= 00:02:19.642 00:02:19.642 apps: 00:02:19.642 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:19.642 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:19.642 test-pmd, test-regex, test-sad, test-security-perf, 00:02:19.642 00:02:19.642 Message: 00:02:19.642 ================= 00:02:19.642 Libraries Enabled 00:02:19.642 ================= 00:02:19.642 00:02:19.642 libs: 00:02:19.642 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:19.642 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:19.642 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:19.642 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:19.642 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:19.642 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:19.642 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:19.642 00:02:19.642 00:02:19.642 Message: 00:02:19.642 =============== 00:02:19.642 Drivers Enabled 00:02:19.642 =============== 00:02:19.642 00:02:19.642 common: 00:02:19.642 00:02:19.642 bus: 00:02:19.642 pci, vdev, 00:02:19.642 mempool: 00:02:19.642 ring, 00:02:19.642 dma: 
00:02:19.642 00:02:19.642 net: 00:02:19.642 i40e, 00:02:19.642 raw: 00:02:19.642 00:02:19.642 crypto: 00:02:19.642 00:02:19.642 compress: 00:02:19.642 00:02:19.642 regex: 00:02:19.642 00:02:19.642 ml: 00:02:19.642 00:02:19.642 vdpa: 00:02:19.642 00:02:19.642 event: 00:02:19.642 00:02:19.642 baseband: 00:02:19.642 00:02:19.642 gpu: 00:02:19.642 00:02:19.642 00:02:19.642 Message: 00:02:19.642 ================= 00:02:19.642 Content Skipped 00:02:19.642 ================= 00:02:19.642 00:02:19.642 apps: 00:02:19.642 00:02:19.642 libs: 00:02:19.642 00:02:19.642 drivers: 00:02:19.642 common/cpt: not in enabled drivers build config 00:02:19.642 common/dpaax: not in enabled drivers build config 00:02:19.642 common/iavf: not in enabled drivers build config 00:02:19.642 common/idpf: not in enabled drivers build config 00:02:19.642 common/mvep: not in enabled drivers build config 00:02:19.642 common/octeontx: not in enabled drivers build config 00:02:19.642 bus/auxiliary: not in enabled drivers build config 00:02:19.642 bus/cdx: not in enabled drivers build config 00:02:19.642 bus/dpaa: not in enabled drivers build config 00:02:19.642 bus/fslmc: not in enabled drivers build config 00:02:19.642 bus/ifpga: not in enabled drivers build config 00:02:19.642 bus/platform: not in enabled drivers build config 00:02:19.642 bus/vmbus: not in enabled drivers build config 00:02:19.642 common/cnxk: not in enabled drivers build config 00:02:19.642 common/mlx5: not in enabled drivers build config 00:02:19.642 common/nfp: not in enabled drivers build config 00:02:19.642 common/qat: not in enabled drivers build config 00:02:19.642 common/sfc_efx: not in enabled drivers build config 00:02:19.642 mempool/bucket: not in enabled drivers build config 00:02:19.642 mempool/cnxk: not in enabled drivers build config 00:02:19.642 mempool/dpaa: not in enabled drivers build config 00:02:19.642 mempool/dpaa2: not in enabled drivers build config 00:02:19.642 mempool/octeontx: not in enabled drivers build 
config 00:02:19.642 mempool/stack: not in enabled drivers build config 00:02:19.642 dma/cnxk: not in enabled drivers build config 00:02:19.642 dma/dpaa: not in enabled drivers build config 00:02:19.642 dma/dpaa2: not in enabled drivers build config 00:02:19.642 dma/hisilicon: not in enabled drivers build config 00:02:19.642 dma/idxd: not in enabled drivers build config 00:02:19.642 dma/ioat: not in enabled drivers build config 00:02:19.642 dma/skeleton: not in enabled drivers build config 00:02:19.642 net/af_packet: not in enabled drivers build config 00:02:19.642 net/af_xdp: not in enabled drivers build config 00:02:19.642 net/ark: not in enabled drivers build config 00:02:19.642 net/atlantic: not in enabled drivers build config 00:02:19.642 net/avp: not in enabled drivers build config 00:02:19.642 net/axgbe: not in enabled drivers build config 00:02:19.642 net/bnx2x: not in enabled drivers build config 00:02:19.642 net/bnxt: not in enabled drivers build config 00:02:19.642 net/bonding: not in enabled drivers build config 00:02:19.642 net/cnxk: not in enabled drivers build config 00:02:19.642 net/cpfl: not in enabled drivers build config 00:02:19.642 net/cxgbe: not in enabled drivers build config 00:02:19.642 net/dpaa: not in enabled drivers build config 00:02:19.642 net/dpaa2: not in enabled drivers build config 00:02:19.642 net/e1000: not in enabled drivers build config 00:02:19.642 net/ena: not in enabled drivers build config 00:02:19.642 net/enetc: not in enabled drivers build config 00:02:19.642 net/enetfec: not in enabled drivers build config 00:02:19.642 net/enic: not in enabled drivers build config 00:02:19.642 net/failsafe: not in enabled drivers build config 00:02:19.642 net/fm10k: not in enabled drivers build config 00:02:19.642 net/gve: not in enabled drivers build config 00:02:19.642 net/hinic: not in enabled drivers build config 00:02:19.642 net/hns3: not in enabled drivers build config 00:02:19.642 net/iavf: not in enabled drivers build config 
00:02:19.642 net/ice: not in enabled drivers build config
00:02:19.642 net/idpf: not in enabled drivers build config
00:02:19.642 net/igc: not in enabled drivers build config
00:02:19.642 net/ionic: not in enabled drivers build config
00:02:19.642 net/ipn3ke: not in enabled drivers build config
00:02:19.642 net/ixgbe: not in enabled drivers build config
00:02:19.642 net/mana: not in enabled drivers build config
00:02:19.642 net/memif: not in enabled drivers build config
00:02:19.642 net/mlx4: not in enabled drivers build config
00:02:19.642 net/mlx5: not in enabled drivers build config
00:02:19.642 net/mvneta: not in enabled drivers build config
00:02:19.642 net/mvpp2: not in enabled drivers build config
00:02:19.642 net/netvsc: not in enabled drivers build config
00:02:19.642 net/nfb: not in enabled drivers build config
00:02:19.642 net/nfp: not in enabled drivers build config
00:02:19.642 net/ngbe: not in enabled drivers build config
00:02:19.642 net/null: not in enabled drivers build config
00:02:19.642 net/octeontx: not in enabled drivers build config
00:02:19.642 net/octeon_ep: not in enabled drivers build config
00:02:19.642 net/pcap: not in enabled drivers build config
00:02:19.642 net/pfe: not in enabled drivers build config
00:02:19.642 net/qede: not in enabled drivers build config
00:02:19.643 net/ring: not in enabled drivers build config
00:02:19.643 net/sfc: not in enabled drivers build config
00:02:19.643 net/softnic: not in enabled drivers build config
00:02:19.643 net/tap: not in enabled drivers build config
00:02:19.643 net/thunderx: not in enabled drivers build config
00:02:19.643 net/txgbe: not in enabled drivers build config
00:02:19.643 net/vdev_netvsc: not in enabled drivers build config
00:02:19.643 net/vhost: not in enabled drivers build config
00:02:19.643 net/virtio: not in enabled drivers build config
00:02:19.643 net/vmxnet3: not in enabled drivers build config
00:02:19.643 raw/cnxk_bphy: not in enabled drivers build config
00:02:19.643 raw/cnxk_gpio: not in enabled drivers build config
00:02:19.643 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:19.643 raw/ifpga: not in enabled drivers build config
00:02:19.643 raw/ntb: not in enabled drivers build config
00:02:19.643 raw/skeleton: not in enabled drivers build config
00:02:19.643 crypto/armv8: not in enabled drivers build config
00:02:19.643 crypto/bcmfs: not in enabled drivers build config
00:02:19.643 crypto/caam_jr: not in enabled drivers build config
00:02:19.643 crypto/ccp: not in enabled drivers build config
00:02:19.643 crypto/cnxk: not in enabled drivers build config
00:02:19.643 crypto/dpaa_sec: not in enabled drivers build config
00:02:19.643 crypto/dpaa2_sec: not in enabled drivers build config
00:02:19.643 crypto/ipsec_mb: not in enabled drivers build config
00:02:19.643 crypto/mlx5: not in enabled drivers build config
00:02:19.643 crypto/mvsam: not in enabled drivers build config
00:02:19.643 crypto/nitrox: not in enabled drivers build config
00:02:19.643 crypto/null: not in enabled drivers build config
00:02:19.643 crypto/octeontx: not in enabled drivers build config
00:02:19.643 crypto/openssl: not in enabled drivers build config
00:02:19.643 crypto/scheduler: not in enabled drivers build config
00:02:19.643 crypto/uadk: not in enabled drivers build config
00:02:19.643 crypto/virtio: not in enabled drivers build config
00:02:19.643 compress/isal: not in enabled drivers build config
00:02:19.643 compress/mlx5: not in enabled drivers build config
00:02:19.643 compress/octeontx: not in enabled drivers build config
00:02:19.643 compress/zlib: not in enabled drivers build config
00:02:19.643 regex/mlx5: not in enabled drivers build config
00:02:19.643 regex/cn9k: not in enabled drivers build config
00:02:19.643 ml/cnxk: not in enabled drivers build config
00:02:19.643 vdpa/ifc: not in enabled drivers build config
00:02:19.643 vdpa/mlx5: not in enabled drivers build config
00:02:19.643 vdpa/nfp: not in enabled drivers build config
00:02:19.643 vdpa/sfc: not in enabled drivers build config
00:02:19.643 event/cnxk: not in enabled drivers build config
00:02:19.643 event/dlb2: not in enabled drivers build config
00:02:19.643 event/dpaa: not in enabled drivers build config
00:02:19.643 event/dpaa2: not in enabled drivers build config
00:02:19.643 event/dsw: not in enabled drivers build config
00:02:19.643 event/opdl: not in enabled drivers build config
00:02:19.643 event/skeleton: not in enabled drivers build config
00:02:19.643 event/sw: not in enabled drivers build config
00:02:19.643 event/octeontx: not in enabled drivers build config
00:02:19.643 baseband/acc: not in enabled drivers build config
00:02:19.643 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:19.643 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:19.643 baseband/la12xx: not in enabled drivers build config
00:02:19.643 baseband/null: not in enabled drivers build config
00:02:19.643 baseband/turbo_sw: not in enabled drivers build config
00:02:19.643 gpu/cuda: not in enabled drivers build config
00:02:19.643
00:02:19.643
00:02:19.643 Build targets in project: 217
00:02:19.643
00:02:19.643 DPDK 23.11.0
00:02:19.643
00:02:19.643 User defined options
00:02:19.643 libdir : lib
00:02:19.643 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:19.643 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:19.643 c_link_args :
00:02:19.643 enable_docs : false
00:02:19.643 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:19.643 enable_kmods : false
00:02:19.643 machine : native
00:02:19.643 tests : false
00:02:19.643
00:02:19.643 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:19.643 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:19.903 16:28:18 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:19.903 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:19.903 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:19.903 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:19.903 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:20.163 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:20.163 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:20.164 [6/707] Linking static target lib/librte_kvargs.a
00:02:20.164 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:20.164 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:20.164 [9/707] Linking static target lib/librte_log.a
00:02:20.164 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:20.164 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.424 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:20.424 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:20.424 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:20.424 [15/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:20.424 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:20.424 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.424 [18/707] Linking target lib/librte_log.so.24.0
00:02:20.684 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:20.684 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:20.684 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:20.684 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:20.684 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:20.944 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:20.944 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:20.944 [26/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:20.944 [27/707] Linking target lib/librte_kvargs.so.24.0
00:02:20.944 [28/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:20.944 [29/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:20.944 [30/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:20.944 [31/707] Linking static target lib/librte_telemetry.a
00:02:20.944 [32/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:20.944 [33/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:20.944 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:21.204 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:21.204 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:21.204 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:21.204 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:21.204 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:21.204 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:21.204 [41/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.204 [42/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:21.204 [43/707] Linking target lib/librte_telemetry.so.24.0
00:02:21.204 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:21.463 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:21.463 [46/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:21.463 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:21.463 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:21.463 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:21.463 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:21.723 [51/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:21.723 [52/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:21.723 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:21.723 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:21.723 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:21.723 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:21.723 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:21.723 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:21.981 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:21.981 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:21.981 [61/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:21.981 [62/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:21.981 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:21.981 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:21.981 [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:21.981 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:21.981 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:21.981 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:22.240 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:22.240 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:22.240 [71/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:22.240 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:22.240 [73/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:22.240 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:22.240 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:22.240 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:22.240 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:22.500 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:22.500 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:22.500 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:22.500 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:22.759 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:22.759 [83/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:22.759 [84/707] Linking static target lib/librte_ring.a
00:02:22.759 [85/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:22.759 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:22.759 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:22.759 [88/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:22.759 [89/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.019 [90/707] Linking static target lib/librte_eal.a
00:02:23.019 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:23.019 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:23.019 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:23.019 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:23.019 [95/707] Linking static target lib/librte_mempool.a
00:02:23.279 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:23.279 [97/707] Linking static target lib/librte_rcu.a
00:02:23.279 [98/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:23.279 [99/707] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:23.279 [100/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:23.279 [101/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:23.279 [102/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:23.279 [103/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:23.539 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:23.539 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.539 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.539 [107/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:23.539 [108/707] Linking static target lib/librte_mbuf.a
00:02:23.539 [109/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:23.539 [110/707] Linking static target lib/librte_net.a
00:02:23.799 [111/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:23.799 [112/707] Linking static target lib/librte_meter.a
00:02:23.799 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:23.799 [114/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:23.799 [115/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.799 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:23.799 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:23.799 [118/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.059 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.318 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:24.318 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:24.578 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:24.578 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:24.578 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:24.578 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:24.578 [126/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:24.578 [127/707] Linking static target lib/librte_pci.a
00:02:24.578 [128/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:24.837 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:24.837 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:24.837 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:24.837 [132/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:24.837 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.837 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:24.837 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:24.837 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:24.837 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:24.837 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:24.837 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:25.097 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:25.097 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:25.097 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:25.097 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:25.097 [144/707] Linking static target lib/librte_cmdline.a
00:02:25.097 [145/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:25.357 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:25.357 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:25.357 [148/707] Linking static target lib/librte_metrics.a
00:02:25.357 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:25.357 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:25.616 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.616 [152/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:25.616 [153/707] Linking static target lib/librte_timer.a
00:02:25.876 [154/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:25.876 [155/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.876 [156/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.137 [157/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:26.137 [158/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:26.137 [159/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:26.137 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:26.710 [161/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:26.710 [162/707] Linking static target lib/librte_bitratestats.a
00:02:26.710 [163/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:26.710 [164/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:26.710 [165/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.710 [166/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:26.710 [167/707] Linking static target lib/librte_bbdev.a
00:02:26.970 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:27.229 [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:27.229 [170/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.229 [171/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:27.489 [172/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:02:27.489 [173/707] Linking static target lib/acl/libavx2_tmp.a
00:02:27.489 [174/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:27.489 [175/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:27.489 [176/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:27.489 [177/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:27.489 [178/707] Linking static target lib/librte_hash.a
00:02:27.489 [179/707] Linking static target lib/librte_ethdev.a
00:02:27.750 [180/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.750 [181/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:27.750 [182/707] Linking static target lib/librte_cfgfile.a
00:02:27.750 [183/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:27.750 [184/707] Linking target lib/librte_eal.so.24.0
00:02:27.750 [185/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:27.750 [186/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:02:28.009 [187/707] Linking target lib/librte_ring.so.24.0
00:02:28.009 [188/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:28.009 [189/707] Linking target lib/librte_meter.so.24.0
00:02:28.009 [190/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.009 [191/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:02:28.009 [192/707] Linking target lib/librte_pci.so.24.0
00:02:28.009 [193/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.009 [194/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:02:28.009 [195/707] Linking target lib/librte_rcu.so.24.0
00:02:28.009 [196/707] Linking target lib/librte_mempool.so.24.0
00:02:28.009 [197/707] Linking target lib/librte_timer.so.24.0
00:02:28.009 [198/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:28.270 [199/707] Linking target lib/librte_cfgfile.so.24.0
00:02:28.270 [200/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:02:28.270 [201/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:02:28.270 [202/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:02:28.270 [203/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:02:28.270 [204/707] Linking target lib/librte_mbuf.so.24.0
00:02:28.270 [205/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:28.270 [206/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:28.270 [207/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:02:28.270 [208/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:28.270 [209/707] Linking target lib/librte_net.so.24.0
00:02:28.270 [210/707] Linking target lib/librte_bbdev.so.24.0
00:02:28.270 [211/707] Linking static target lib/librte_bpf.a
00:02:28.530 [212/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:02:28.530 [213/707] Linking target lib/librte_cmdline.so.24.0
00:02:28.530 [214/707] Linking target lib/librte_hash.so.24.0
00:02:28.530 [215/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:28.530 [216/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:02:28.530 [217/707] Linking static target lib/librte_acl.a
00:02:28.530 [218/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:28.530 [219/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.530 [220/707] Linking static target lib/librte_compressdev.a
00:02:28.530 [221/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:02:28.790 [222/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:28.790 [223/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:28.790 [224/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.790 [225/707] Linking target lib/librte_acl.so.24.0
00:02:29.050 [226/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:29.050 [227/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:29.050 [228/707] Linking static target lib/librte_distributor.a
00:02:29.050 [229/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:29.050 [230/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:02:29.050 [231/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.050 [232/707] Linking target lib/librte_compressdev.so.24.0
00:02:29.050 [233/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:29.050 [234/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.310 [235/707] Linking target lib/librte_distributor.so.24.0
00:02:29.310 [236/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:29.310 [237/707] Linking static target lib/librte_dmadev.a
00:02:29.570 [238/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:29.570 [239/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:29.570 [240/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.570 [241/707] Linking target lib/librte_dmadev.so.24.0
00:02:29.830 [242/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:02:29.830 [243/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:02:29.830 [244/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:29.830 [245/707] Linking static target lib/librte_efd.a
00:02:30.089 [246/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.089 [247/707] Linking target lib/librte_efd.so.24.0
00:02:30.089 [248/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:30.089 [249/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:30.089 [250/707] Linking static target lib/librte_cryptodev.a
00:02:30.349 [251/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:30.349 [252/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:02:30.349 [253/707] Linking static target lib/librte_dispatcher.a
00:02:30.609 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:30.609 [255/707] Linking static target lib/librte_gpudev.a
00:02:30.609 [256/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:30.609 [257/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:30.870 [258/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:30.870 [259/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:02:30.870 [260/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.130 [261/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:31.130 [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:31.130 [263/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:31.130 [264/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:31.130 [265/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.130 [266/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.390 [267/707] Linking target lib/librte_gpudev.so.24.0
00:02:31.390 [268/707] Linking target lib/librte_cryptodev.so.24.0
00:02:31.390 [269/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:31.390 [270/707] Linking static target lib/librte_gro.a
00:02:31.390 [271/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:02:31.390 [272/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:31.390 [273/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:31.390 [274/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:31.390 [275/707] Linking static target lib/librte_eventdev.a
00:02:31.390 [276/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.650 [277/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:31.650 [278/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:31.650 [279/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:31.650 [280/707] Linking static target lib/librte_gso.a
00:02:31.650 [281/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.909 [282/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.909 [283/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:31.909 [284/707] Linking target lib/librte_ethdev.so.24.0
00:02:31.909 [285/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:31.909 [286/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:02:31.909 [287/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:31.910 [288/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:31.910 [289/707] Linking target lib/librte_metrics.so.24.0
00:02:31.910 [290/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:31.910 [291/707] Linking target lib/librte_gro.so.24.0
00:02:31.910 [292/707] Linking target lib/librte_bpf.so.24.0
00:02:31.910 [293/707] Linking static target lib/librte_jobstats.a
00:02:31.910 [294/707] Linking target lib/librte_gso.so.24.0
00:02:32.169 [295/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:32.169 [296/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:02:32.169 [297/707] Linking target lib/librte_bitratestats.so.24.0
00:02:32.170 [298/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:02:32.170 [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:32.170 [300/707] Linking static target lib/librte_ip_frag.a
00:02:32.170 [301/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.170 [302/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:32.170 [303/707] Linking static target lib/librte_latencystats.a
00:02:32.439 [304/707] Linking target lib/librte_jobstats.so.24.0
00:02:32.439 [305/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:32.439 [306/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.439 [307/707] Linking target lib/librte_ip_frag.so.24.0
00:02:32.439 [308/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:32.439 [309/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:32.439 [310/707] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:32.439 [311/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.439 [312/707] Linking target lib/librte_latencystats.so.24.0
00:02:32.439 [313/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:02:32.710 [314/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:32.710 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:32.710 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:32.710 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:32.710 [318/707] Linking static target lib/librte_lpm.a
00:02:32.970 [319/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:32.970 [320/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:32.970 [321/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:32.970 [322/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:32.970 [323/707] Linking static target lib/librte_pcapng.a
00:02:32.970 [324/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:33.230 [325/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:33.230 [326/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.230 [327/707] Linking target lib/librte_lpm.so.24.0
00:02:33.230 [328/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:33.230 [329/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.230 [330/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:02:33.230 [331/707] Linking target lib/librte_pcapng.so.24.0
00:02:33.230 [332/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:33.230 [333/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:33.490 [334/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.490 [335/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:02:33.490 [336/707] Linking target lib/librte_eventdev.so.24.0
00:02:33.490 [337/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:33.490 [338/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:02:33.490 [339/707] Linking target lib/librte_dispatcher.so.24.0
00:02:33.490 [340/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:33.490 [341/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:02:33.490 [342/707] Linking static target lib/librte_power.a
00:02:33.750 [343/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:33.750 [344/707] Linking static target lib/librte_rawdev.a
00:02:33.750 [345/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:33.750 [346/707] Linking static target lib/librte_regexdev.a
00:02:33.750 [347/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:02:33.750 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:02:33.750 [349/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:33.750 [350/707] Linking static target lib/librte_member.a
00:02:34.011 [351/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:02:34.011 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:02:34.011 [353/707] Linking static target lib/librte_mldev.a
00:02:34.011 [354/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.011 [355/707] Linking target lib/librte_rawdev.so.24.0
00:02:34.011 [356/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.011 [357/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:34.270 [358/707] Linking target lib/librte_member.so.24.0
00:02:34.270 [359/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.270 [360/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:34.270 [361/707] Linking static target lib/librte_reorder.a
00:02:34.270 [362/707] Linking target lib/librte_power.so.24.0
00:02:34.270 [363/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:34.270 [364/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:34.270 [365/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.270 [366/707] Linking target lib/librte_regexdev.so.24.0
00:02:34.270 [367/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:34.529 [368/707] Linking static target lib/librte_rib.a
00:02:34.529 [369/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:34.529 [370/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:34.529 [371/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.529 [372/707] Linking target lib/librte_reorder.so.24.0
00:02:34.529 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:34.529 [374/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:34.529 [375/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:34.529 [376/707] Linking static target lib/librte_stack.a
00:02:34.529 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:02:34.788 [378/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:34.789 [379/707] Linking static target lib/librte_security.a
00:02:34.789 [380/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.789 [381/707] Linking target lib/librte_stack.so.24.0
00:02:34.789 [382/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.789 [383/707] Linking target lib/librte_rib.so.24.0
00:02:35.047 [384/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:02:35.048 [385/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:35.048 [386/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.048 [387/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:35.048 [388/707] Linking target lib/librte_mldev.so.24.0
00:02:35.048 [389/707] Generating
lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.048 [390/707] Linking target lib/librte_security.so.24.0 00:02:35.048 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:35.306 [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:35.306 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:35.307 [394/707] Linking static target lib/librte_sched.a 00:02:35.566 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:35.566 [396/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.566 [397/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:35.566 [398/707] Linking target lib/librte_sched.so.24.0 00:02:35.566 [399/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:35.825 [400/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:35.825 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:35.825 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:35.825 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:36.084 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:36.341 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:36.341 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:36.341 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:36.341 [408/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:36.341 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:36.597 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:36.597 [411/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:36.597 [412/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:36.597 
[413/707] Linking static target lib/librte_ipsec.a 00:02:36.597 [414/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:36.854 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:36.854 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.854 [417/707] Linking target lib/librte_ipsec.so.24.0 00:02:36.854 [418/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:37.111 [419/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:37.111 [420/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:37.111 [421/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:37.370 [422/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:37.370 [423/707] Linking static target lib/librte_fib.a 00:02:37.370 [424/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:37.370 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:37.628 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:37.628 [427/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.628 [428/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:37.628 [429/707] Linking target lib/librte_fib.so.24.0 00:02:37.628 [430/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:37.628 [431/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:37.628 [432/707] Linking static target lib/librte_pdcp.a 00:02:37.886 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.886 [434/707] Linking target lib/librte_pdcp.so.24.0 00:02:38.146 [435/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:38.146 [436/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:38.146 [437/707] Compiling C object 
lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:38.146 [438/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:38.404 [439/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:38.404 [440/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:38.404 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:38.663 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:38.663 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:38.663 [444/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:38.922 [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:38.922 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:38.922 [447/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:38.922 [448/707] Linking static target lib/librte_port.a 00:02:38.922 [449/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:38.922 [450/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:39.181 [451/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:39.181 [452/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:39.181 [453/707] Linking static target lib/librte_pdump.a 00:02:39.441 [454/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.441 [455/707] Linking target lib/librte_port.so.24.0 00:02:39.441 [456/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.441 [457/707] Linking target lib/librte_pdump.so.24.0 00:02:39.441 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:39.701 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:39.701 [460/707] 
Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:39.701 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:39.701 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:39.961 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:39.961 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:39.961 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:39.961 [466/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:39.961 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:39.961 [468/707] Linking static target lib/librte_table.a 00:02:40.220 [469/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:40.480 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:40.480 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:40.480 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.480 [473/707] Linking target lib/librte_table.so.24.0 00:02:40.739 [474/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:40.739 [475/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:40.739 [476/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:40.739 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:40.998 [478/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:40.998 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:41.258 [480/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:41.258 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:41.258 [482/707] Compiling C object 
lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:41.258 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:41.519 [484/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:41.519 [485/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:41.519 [486/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:41.779 [487/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:41.779 [488/707] Linking static target lib/librte_graph.a 00:02:41.779 [489/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:41.779 [490/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:42.039 [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:42.039 [492/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.039 [493/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:42.299 [494/707] Linking target lib/librte_graph.so.24.0 00:02:42.299 [495/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:42.299 [496/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:42.299 [497/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:42.299 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:42.299 [499/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:42.559 [500/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:42.559 [501/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:42.559 [502/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:42.559 [503/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:42.559 [504/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:42.819 [505/707] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:42.819 [506/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:42.819 [507/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:42.819 [508/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:42.819 [509/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:43.079 [510/707] Linking static target lib/librte_node.a 00:02:43.079 [511/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:43.079 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:43.079 [513/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.339 [514/707] Linking target lib/librte_node.so.24.0 00:02:43.339 [515/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:43.339 [516/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:43.339 [517/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:43.339 [518/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:43.598 [519/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:43.598 [520/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:43.598 [521/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:43.598 [522/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:43.598 [523/707] Linking static target drivers/librte_bus_pci.a 00:02:43.598 [524/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:43.598 [525/707] Linking static target drivers/librte_bus_vdev.a 00:02:43.598 [526/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:43.598 [527/707] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:43.598 [528/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:43.598 [529/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:43.598 [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.858 [531/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:43.858 [532/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:43.858 [533/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:43.858 [534/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:43.858 [535/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.858 [536/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:43.858 [537/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:43.858 [538/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:43.858 [539/707] Linking static target drivers/librte_mempool_ring.a 00:02:43.858 [540/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:43.858 [541/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:43.858 [542/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:44.117 [543/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:44.376 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:44.376 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:44.635 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:44.635 [547/707] Linking static target 
drivers/net/i40e/base/libi40e_base.a 00:02:45.203 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:45.470 [549/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:45.470 [550/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:45.470 [551/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:45.470 [552/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:45.749 [553/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:45.749 [554/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:45.749 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:46.008 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:46.008 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:46.268 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:46.268 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:46.525 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:46.525 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:46.525 [562/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:46.784 [563/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:47.043 [564/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:47.043 [565/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:47.043 [566/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:47.043 [567/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:47.043 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:47.302 [569/707] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:47.302 [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:47.302 [571/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:47.302 [572/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:47.302 [573/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:47.562 [574/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:47.562 [575/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:47.822 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:47.822 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:47.822 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:47.822 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:47.822 [580/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:48.081 [581/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:48.341 [582/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:48.341 [583/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:48.341 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:48.341 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:48.341 [586/707] Linking static target drivers/librte_net_i40e.a 00:02:48.341 [587/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:48.341 [588/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:48.341 [589/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:48.600 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:48.860 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:48.860 [592/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:48.860 [593/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:48.860 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:48.860 [595/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:49.120 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:49.120 [597/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:49.120 [598/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:49.379 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:49.379 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:49.639 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:49.639 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:49.639 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:49.639 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:49.900 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:49.900 [606/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:49.900 [607/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:49.900 [608/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:49.900 [609/707] Linking static target lib/librte_vhost.a 00:02:49.900 [610/707] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:49.900 [611/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:50.160 [612/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:50.160 [613/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:50.160 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:50.421 [615/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:50.421 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:50.680 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:50.681 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:50.941 [619/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.941 [620/707] Linking target lib/librte_vhost.so.24.0 00:02:51.200 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:51.459 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:51.459 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:51.459 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:51.459 [625/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:51.459 [626/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:51.459 [627/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:51.459 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:51.718 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:51.718 [630/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:51.718 [631/707] Compiling C 
object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:51.718 [632/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:51.718 [633/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:51.977 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:51.977 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:51.977 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:51.977 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:52.237 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:52.237 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:52.237 [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:52.237 [641/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:52.497 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:52.497 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:52.497 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:52.756 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:52.756 [646/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:52.756 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:52.756 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:52.756 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:53.016 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:53.016 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:53.016 [652/707] Compiling C 
object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:53.275 [653/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:53.275 [654/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:53.275 [655/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:53.534 [656/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:53.535 [657/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:53.535 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:53.535 [659/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:53.795 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:54.055 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:54.055 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:54.055 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:54.055 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:54.314 [665/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:54.573 [666/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:54.573 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:54.573 [668/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:54.573 [669/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:54.835 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:54.835 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:55.098 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:55.098 [673/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:55.358 [674/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:55.358 [675/707] Linking static target 
lib/librte_pipeline.a 00:02:55.358 [676/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:55.618 [677/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:55.618 [678/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:55.618 [679/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:55.618 [680/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:55.877 [681/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:55.877 [682/707] Linking target app/dpdk-dumpcap 00:02:55.877 [683/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:55.877 [684/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:55.877 [685/707] Linking target app/dpdk-graph 00:02:55.877 [686/707] Linking target app/dpdk-pdump 00:02:55.877 [687/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:56.136 [688/707] Linking target app/dpdk-proc-info 00:02:56.136 [689/707] Linking target app/dpdk-test-acl 00:02:56.136 [690/707] Linking target app/dpdk-test-cmdline 00:02:56.136 [691/707] Linking target app/dpdk-test-bbdev 00:02:56.400 [692/707] Linking target app/dpdk-test-compress-perf 00:02:56.400 [693/707] Linking target app/dpdk-test-crypto-perf 00:02:56.400 [694/707] Linking target app/dpdk-test-dma-perf 00:02:56.400 [695/707] Linking target app/dpdk-test-eventdev 00:02:56.400 [696/707] Linking target app/dpdk-test-fib 00:02:56.400 [697/707] Linking target app/dpdk-test-flow-perf 00:02:56.400 [698/707] Linking target app/dpdk-test-gpudev 00:02:56.664 [699/707] Linking target app/dpdk-test-mldev 00:02:56.664 [700/707] Linking target app/dpdk-test-pipeline 00:02:56.664 [701/707] Linking target app/dpdk-test-regex 00:02:56.664 [702/707] Linking target app/dpdk-test-sad 00:02:56.664 [703/707] Linking target app/dpdk-testpmd 00:02:57.233 [704/707] Compiling C object 
app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:57.803 [705/707] Linking target app/dpdk-test-security-perf 00:03:00.344 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.604 [707/707] Linking target lib/librte_pipeline.so.24.0 00:03:00.604 16:28:59 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:00.604 16:28:59 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:00.604 16:28:59 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:00.604 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:00.604 [0/1] Installing files. 00:03:00.869 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:00.869 
Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.869 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 
00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 
00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 
Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.870 
Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.870 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 
00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:00.871 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.871 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.872 Installing 
/home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.872 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.872 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.872 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.873 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 
00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.873 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:00.873 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:00.874 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:00.874 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_rcu.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing 
lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.874 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing 
lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_rawdev.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 
Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.875 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.449 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.449 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.449 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.449 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:01.449 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.449 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:01.449 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.449 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:01.449 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.449 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:01.449 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-graph to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing 
/home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.449 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.450 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing 
/home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing 
/home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing 
/home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.451 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 
Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 
Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing 
/home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:01.452 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:01.452 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:01.452 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:01.452 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:01.452 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:01.452 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:01.452 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:01.452 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:01.452 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:01.452 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:01.452 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:01.452 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:01.452 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:01.452 Installing symlink pointing to librte_mempool.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:01.452 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:01.452 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:01.452 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:01.452 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:01.452 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:01.452 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:01.452 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:01.452 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:01.452 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:01.452 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:01.452 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:01.452 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:01.452 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:01.452 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:01.452 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:01.452 Installing symlink pointing to librte_hash.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:01.452 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:01.452 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:01.452 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:01.452 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:01.452 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:01.452 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:01.452 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:01.452 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:01.452 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:01.452 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:01.453 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:01.453 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:01.453 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:01.453 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:01.453 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:01.453 Installing symlink pointing to 
librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:01.453 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:01.453 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:01.453 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:01.453 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:01.453 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:01.453 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:01.453 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:01.453 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:01.453 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:01.453 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:01.453 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:01.453 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:01.453 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:01.453 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:01.453 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 
00:03:01.453 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:01.453 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:01.453 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:01.453 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:01.453 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:01.453 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:01.453 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:01.453 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:01.453 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:01.453 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:01.453 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:01.453 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:01.453 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:01.453 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:01.453 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:01.453 Installing symlink pointing to librte_power.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:01.453 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:01.453 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:01.453 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:01.453 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:01.453 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:01.453 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:01.453 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:01.453 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:01.453 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:01.453 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:01.453 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:01.453 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:01.453 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:01.453 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:01.453 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:01.453 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:01.453 
'./librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:01.453 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:01.453 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:01.453 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:01.453 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:01.453 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:01.453 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:01.453 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:01.453 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:01.453 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:01.453 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:01.453 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:01.453 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:01.453 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:01.453 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:01.453 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:01.453 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:01.453 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:01.453 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:01.453 Installing symlink pointing to librte_fib.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:01.453 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:01.453 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:01.453 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:01.453 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:01.453 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:01.453 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:01.453 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:01.453 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:01.453 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:01.453 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:01.453 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:01.453 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:01.453 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:01.453 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:01.453 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:01.453 Installing 
symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:01.453 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:01.453 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:01.453 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:01.454 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:01.454 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:01.454 ************************************ 00:03:01.454 END TEST build_native_dpdk 00:03:01.454 ************************************ 00:03:01.454 16:29:00 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:01.454 16:29:00 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:01.454 00:03:01.454 real 0m49.223s 00:03:01.454 user 5m12.178s 00:03:01.454 sys 0m58.193s 00:03:01.454 16:29:00 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:01.454 16:29:00 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:01.454 16:29:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:01.454 16:29:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:01.454 16:29:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:01.454 16:29:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:01.454 16:29:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:01.454 16:29:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:01.454 16:29:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:01.454 16:29:00 -- spdk/autobuild.sh@67 -- $ 
/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:01.714 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:01.714 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.714 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:01.714 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:02.284 Using 'verbs' RDMA provider 00:03:18.147 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:36.254 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:36.254 Creating mk/config.mk...done. 00:03:36.254 Creating mk/cc.flags.mk...done. 00:03:36.254 Type 'make' to build. 00:03:36.254 16:29:33 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:36.254 16:29:33 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:36.254 16:29:33 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:36.254 16:29:33 -- common/autotest_common.sh@10 -- $ set +x 00:03:36.254 ************************************ 00:03:36.254 START TEST make 00:03:36.254 ************************************ 00:03:36.254 16:29:33 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:36.254 make[1]: Nothing to be done for 'all'. 
00:04:23.029 CC lib/ut/ut.o 00:04:23.029 CC lib/log/log.o 00:04:23.029 CC lib/log/log_deprecated.o 00:04:23.029 CC lib/log/log_flags.o 00:04:23.029 CC lib/ut_mock/mock.o 00:04:23.029 LIB libspdk_log.a 00:04:23.029 LIB libspdk_ut.a 00:04:23.029 LIB libspdk_ut_mock.a 00:04:23.029 SO libspdk_ut.so.2.0 00:04:23.029 SO libspdk_log.so.7.0 00:04:23.029 SO libspdk_ut_mock.so.6.0 00:04:23.029 SYMLINK libspdk_ut.so 00:04:23.029 SYMLINK libspdk_log.so 00:04:23.029 SYMLINK libspdk_ut_mock.so 00:04:23.029 CC lib/util/base64.o 00:04:23.029 CC lib/util/cpuset.o 00:04:23.029 CC lib/util/bit_array.o 00:04:23.029 CC lib/util/crc32.o 00:04:23.029 CC lib/util/crc16.o 00:04:23.029 CC lib/util/crc32c.o 00:04:23.029 CC lib/dma/dma.o 00:04:23.029 CXX lib/trace_parser/trace.o 00:04:23.029 CC lib/ioat/ioat.o 00:04:23.029 CC lib/vfio_user/host/vfio_user_pci.o 00:04:23.029 CC lib/vfio_user/host/vfio_user.o 00:04:23.029 CC lib/util/crc32_ieee.o 00:04:23.029 CC lib/util/crc64.o 00:04:23.029 CC lib/util/dif.o 00:04:23.029 CC lib/util/fd.o 00:04:23.029 LIB libspdk_dma.a 00:04:23.029 CC lib/util/fd_group.o 00:04:23.029 SO libspdk_dma.so.5.0 00:04:23.029 CC lib/util/file.o 00:04:23.029 CC lib/util/hexlify.o 00:04:23.029 SYMLINK libspdk_dma.so 00:04:23.029 LIB libspdk_ioat.a 00:04:23.029 CC lib/util/iov.o 00:04:23.029 CC lib/util/math.o 00:04:23.029 SO libspdk_ioat.so.7.0 00:04:23.029 CC lib/util/net.o 00:04:23.029 LIB libspdk_vfio_user.a 00:04:23.029 CC lib/util/pipe.o 00:04:23.029 SO libspdk_vfio_user.so.5.0 00:04:23.029 SYMLINK libspdk_ioat.so 00:04:23.029 CC lib/util/strerror_tls.o 00:04:23.029 CC lib/util/string.o 00:04:23.029 SYMLINK libspdk_vfio_user.so 00:04:23.029 CC lib/util/uuid.o 00:04:23.029 CC lib/util/xor.o 00:04:23.029 CC lib/util/zipf.o 00:04:23.029 CC lib/util/md5.o 00:04:23.289 LIB libspdk_util.a 00:04:23.289 LIB libspdk_trace_parser.a 00:04:23.289 SO libspdk_util.so.10.0 00:04:23.289 SO libspdk_trace_parser.so.6.0 00:04:23.550 SYMLINK libspdk_trace_parser.so 00:04:23.550 SYMLINK 
libspdk_util.so 00:04:23.550 CC lib/conf/conf.o 00:04:23.550 CC lib/rdma_provider/common.o 00:04:23.550 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:23.550 CC lib/vmd/vmd.o 00:04:23.809 CC lib/vmd/led.o 00:04:23.809 CC lib/json/json_util.o 00:04:23.809 CC lib/json/json_parse.o 00:04:23.809 CC lib/rdma_utils/rdma_utils.o 00:04:23.809 CC lib/env_dpdk/env.o 00:04:23.809 CC lib/idxd/idxd.o 00:04:23.809 CC lib/idxd/idxd_user.o 00:04:23.809 CC lib/idxd/idxd_kernel.o 00:04:23.809 LIB libspdk_rdma_provider.a 00:04:23.809 CC lib/json/json_write.o 00:04:23.809 LIB libspdk_conf.a 00:04:23.809 CC lib/env_dpdk/memory.o 00:04:23.809 SO libspdk_rdma_provider.so.6.0 00:04:23.809 SO libspdk_conf.so.6.0 00:04:24.070 LIB libspdk_rdma_utils.a 00:04:24.070 SYMLINK libspdk_rdma_provider.so 00:04:24.070 SO libspdk_rdma_utils.so.1.0 00:04:24.070 CC lib/env_dpdk/pci.o 00:04:24.070 SYMLINK libspdk_conf.so 00:04:24.070 CC lib/env_dpdk/init.o 00:04:24.070 SYMLINK libspdk_rdma_utils.so 00:04:24.070 CC lib/env_dpdk/threads.o 00:04:24.070 CC lib/env_dpdk/pci_ioat.o 00:04:24.070 CC lib/env_dpdk/pci_virtio.o 00:04:24.070 LIB libspdk_json.a 00:04:24.070 CC lib/env_dpdk/pci_vmd.o 00:04:24.329 SO libspdk_json.so.6.0 00:04:24.329 CC lib/env_dpdk/pci_idxd.o 00:04:24.329 CC lib/env_dpdk/pci_event.o 00:04:24.329 SYMLINK libspdk_json.so 00:04:24.329 CC lib/env_dpdk/sigbus_handler.o 00:04:24.329 CC lib/env_dpdk/pci_dpdk.o 00:04:24.329 LIB libspdk_idxd.a 00:04:24.329 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:24.329 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:24.329 SO libspdk_idxd.so.12.1 00:04:24.329 SYMLINK libspdk_idxd.so 00:04:24.589 LIB libspdk_vmd.a 00:04:24.589 SO libspdk_vmd.so.6.0 00:04:24.589 SYMLINK libspdk_vmd.so 00:04:24.589 CC lib/jsonrpc/jsonrpc_server.o 00:04:24.589 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:24.589 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:24.589 CC lib/jsonrpc/jsonrpc_client.o 00:04:24.850 LIB libspdk_jsonrpc.a 00:04:25.110 SO libspdk_jsonrpc.so.6.0 00:04:25.110 SYMLINK 
libspdk_jsonrpc.so 00:04:25.110 LIB libspdk_env_dpdk.a 00:04:25.371 SO libspdk_env_dpdk.so.15.0 00:04:25.371 SYMLINK libspdk_env_dpdk.so 00:04:25.371 CC lib/rpc/rpc.o 00:04:25.632 LIB libspdk_rpc.a 00:04:25.893 SO libspdk_rpc.so.6.0 00:04:25.893 SYMLINK libspdk_rpc.so 00:04:26.153 CC lib/notify/notify.o 00:04:26.153 CC lib/notify/notify_rpc.o 00:04:26.153 CC lib/trace/trace.o 00:04:26.153 CC lib/trace/trace_flags.o 00:04:26.153 CC lib/trace/trace_rpc.o 00:04:26.153 CC lib/keyring/keyring.o 00:04:26.153 CC lib/keyring/keyring_rpc.o 00:04:26.413 LIB libspdk_notify.a 00:04:26.413 SO libspdk_notify.so.6.0 00:04:26.413 SYMLINK libspdk_notify.so 00:04:26.413 LIB libspdk_trace.a 00:04:26.413 LIB libspdk_keyring.a 00:04:26.673 SO libspdk_keyring.so.2.0 00:04:26.673 SO libspdk_trace.so.11.0 00:04:26.673 SYMLINK libspdk_keyring.so 00:04:26.673 SYMLINK libspdk_trace.so 00:04:26.932 CC lib/thread/thread.o 00:04:26.932 CC lib/thread/iobuf.o 00:04:26.932 CC lib/sock/sock.o 00:04:26.932 CC lib/sock/sock_rpc.o 00:04:27.500 LIB libspdk_sock.a 00:04:27.500 SO libspdk_sock.so.10.0 00:04:27.500 SYMLINK libspdk_sock.so 00:04:28.070 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:28.070 CC lib/nvme/nvme_ctrlr.o 00:04:28.070 CC lib/nvme/nvme_fabric.o 00:04:28.070 CC lib/nvme/nvme_ns_cmd.o 00:04:28.070 CC lib/nvme/nvme_ns.o 00:04:28.070 CC lib/nvme/nvme_pcie_common.o 00:04:28.070 CC lib/nvme/nvme_pcie.o 00:04:28.070 CC lib/nvme/nvme_qpair.o 00:04:28.070 CC lib/nvme/nvme.o 00:04:28.641 CC lib/nvme/nvme_quirks.o 00:04:28.641 CC lib/nvme/nvme_transport.o 00:04:28.641 LIB libspdk_thread.a 00:04:28.641 CC lib/nvme/nvme_discovery.o 00:04:28.641 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:28.641 SO libspdk_thread.so.10.1 00:04:28.900 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:28.900 SYMLINK libspdk_thread.so 00:04:28.900 CC lib/nvme/nvme_tcp.o 00:04:28.900 CC lib/nvme/nvme_opal.o 00:04:28.900 CC lib/nvme/nvme_io_msg.o 00:04:29.159 CC lib/nvme/nvme_poll_group.o 00:04:29.159 CC lib/nvme/nvme_zns.o 00:04:29.159 CC 
lib/nvme/nvme_stubs.o 00:04:29.418 CC lib/nvme/nvme_auth.o 00:04:29.418 CC lib/nvme/nvme_cuse.o 00:04:29.418 CC lib/accel/accel.o 00:04:29.418 CC lib/blob/blobstore.o 00:04:29.678 CC lib/blob/request.o 00:04:29.678 CC lib/init/json_config.o 00:04:29.937 CC lib/virtio/virtio.o 00:04:29.937 CC lib/fsdev/fsdev.o 00:04:29.937 CC lib/init/subsystem.o 00:04:29.937 CC lib/init/subsystem_rpc.o 00:04:30.195 CC lib/nvme/nvme_rdma.o 00:04:30.195 CC lib/init/rpc.o 00:04:30.195 CC lib/virtio/virtio_vhost_user.o 00:04:30.195 CC lib/accel/accel_rpc.o 00:04:30.195 CC lib/accel/accel_sw.o 00:04:30.455 LIB libspdk_init.a 00:04:30.455 SO libspdk_init.so.6.0 00:04:30.455 SYMLINK libspdk_init.so 00:04:30.455 CC lib/blob/zeroes.o 00:04:30.455 CC lib/blob/blob_bs_dev.o 00:04:30.455 CC lib/fsdev/fsdev_io.o 00:04:30.455 CC lib/virtio/virtio_vfio_user.o 00:04:30.715 CC lib/virtio/virtio_pci.o 00:04:30.715 CC lib/fsdev/fsdev_rpc.o 00:04:30.974 CC lib/event/app.o 00:04:30.974 CC lib/event/reactor.o 00:04:30.974 CC lib/event/log_rpc.o 00:04:30.974 CC lib/event/app_rpc.o 00:04:30.974 CC lib/event/scheduler_static.o 00:04:30.974 LIB libspdk_fsdev.a 00:04:30.974 LIB libspdk_accel.a 00:04:30.974 SO libspdk_fsdev.so.1.0 00:04:30.974 LIB libspdk_virtio.a 00:04:30.974 SO libspdk_accel.so.16.0 00:04:30.974 SO libspdk_virtio.so.7.0 00:04:30.974 SYMLINK libspdk_fsdev.so 00:04:30.974 SYMLINK libspdk_accel.so 00:04:30.974 SYMLINK libspdk_virtio.so 00:04:31.233 CC lib/bdev/bdev.o 00:04:31.233 CC lib/bdev/bdev_rpc.o 00:04:31.233 CC lib/bdev/scsi_nvme.o 00:04:31.233 CC lib/bdev/bdev_zone.o 00:04:31.233 CC lib/bdev/part.o 00:04:31.233 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:31.233 LIB libspdk_event.a 00:04:31.492 SO libspdk_event.so.14.0 00:04:31.492 SYMLINK libspdk_event.so 00:04:31.492 LIB libspdk_nvme.a 00:04:31.750 SO libspdk_nvme.so.14.0 00:04:32.009 LIB libspdk_fuse_dispatcher.a 00:04:32.009 SYMLINK libspdk_nvme.so 00:04:32.009 SO libspdk_fuse_dispatcher.so.1.0 00:04:32.009 SYMLINK 
libspdk_fuse_dispatcher.so 00:04:33.909 LIB libspdk_blob.a 00:04:33.909 SO libspdk_blob.so.11.0 00:04:33.909 SYMLINK libspdk_blob.so 00:04:34.167 CC lib/lvol/lvol.o 00:04:34.167 CC lib/blobfs/blobfs.o 00:04:34.167 CC lib/blobfs/tree.o 00:04:34.425 LIB libspdk_bdev.a 00:04:34.425 SO libspdk_bdev.so.16.0 00:04:34.684 SYMLINK libspdk_bdev.so 00:04:34.950 CC lib/nvmf/ctrlr.o 00:04:34.950 CC lib/nvmf/ctrlr_discovery.o 00:04:34.950 CC lib/nvmf/ctrlr_bdev.o 00:04:34.950 CC lib/nvmf/subsystem.o 00:04:34.950 CC lib/nbd/nbd.o 00:04:34.950 CC lib/scsi/dev.o 00:04:34.950 CC lib/ublk/ublk.o 00:04:34.950 CC lib/ftl/ftl_core.o 00:04:35.222 LIB libspdk_lvol.a 00:04:35.222 CC lib/scsi/lun.o 00:04:35.222 SO libspdk_lvol.so.10.0 00:04:35.222 LIB libspdk_blobfs.a 00:04:35.222 SO libspdk_blobfs.so.10.0 00:04:35.222 SYMLINK libspdk_lvol.so 00:04:35.222 CC lib/ftl/ftl_init.o 00:04:35.222 SYMLINK libspdk_blobfs.so 00:04:35.222 CC lib/ublk/ublk_rpc.o 00:04:35.222 CC lib/nbd/nbd_rpc.o 00:04:35.222 CC lib/nvmf/nvmf.o 00:04:35.483 CC lib/nvmf/nvmf_rpc.o 00:04:35.483 CC lib/scsi/port.o 00:04:35.483 CC lib/ftl/ftl_layout.o 00:04:35.483 CC lib/scsi/scsi.o 00:04:35.483 LIB libspdk_nbd.a 00:04:35.483 SO libspdk_nbd.so.7.0 00:04:35.742 SYMLINK libspdk_nbd.so 00:04:35.742 CC lib/nvmf/transport.o 00:04:35.742 CC lib/nvmf/tcp.o 00:04:35.742 CC lib/scsi/scsi_bdev.o 00:04:35.742 LIB libspdk_ublk.a 00:04:35.742 SO libspdk_ublk.so.3.0 00:04:35.742 SYMLINK libspdk_ublk.so 00:04:35.742 CC lib/nvmf/stubs.o 00:04:35.742 CC lib/nvmf/mdns_server.o 00:04:35.742 CC lib/ftl/ftl_debug.o 00:04:36.309 CC lib/ftl/ftl_io.o 00:04:36.309 CC lib/scsi/scsi_pr.o 00:04:36.309 CC lib/nvmf/rdma.o 00:04:36.309 CC lib/nvmf/auth.o 00:04:36.309 CC lib/scsi/scsi_rpc.o 00:04:36.309 CC lib/scsi/task.o 00:04:36.567 CC lib/ftl/ftl_sb.o 00:04:36.567 CC lib/ftl/ftl_l2p.o 00:04:36.567 CC lib/ftl/ftl_l2p_flat.o 00:04:36.567 CC lib/ftl/ftl_nv_cache.o 00:04:36.567 CC lib/ftl/ftl_band.o 00:04:36.567 LIB libspdk_scsi.a 00:04:36.567 CC 
lib/ftl/ftl_band_ops.o 00:04:36.567 CC lib/ftl/ftl_writer.o 00:04:36.567 SO libspdk_scsi.so.9.0 00:04:36.826 CC lib/ftl/ftl_rq.o 00:04:36.826 SYMLINK libspdk_scsi.so 00:04:36.826 CC lib/ftl/ftl_reloc.o 00:04:36.826 CC lib/ftl/ftl_l2p_cache.o 00:04:37.084 CC lib/ftl/ftl_p2l.o 00:04:37.084 CC lib/ftl/ftl_p2l_log.o 00:04:37.084 CC lib/ftl/mngt/ftl_mngt.o 00:04:37.341 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:37.341 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:37.341 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:37.341 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:37.341 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:37.341 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:37.341 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:37.598 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:37.598 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:37.598 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:37.598 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:37.598 CC lib/iscsi/conn.o 00:04:37.598 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:37.856 CC lib/vhost/vhost.o 00:04:37.856 CC lib/ftl/utils/ftl_conf.o 00:04:37.856 CC lib/ftl/utils/ftl_md.o 00:04:37.856 CC lib/iscsi/init_grp.o 00:04:37.856 CC lib/ftl/utils/ftl_mempool.o 00:04:37.856 CC lib/vhost/vhost_rpc.o 00:04:37.856 CC lib/vhost/vhost_scsi.o 00:04:37.856 CC lib/ftl/utils/ftl_bitmap.o 00:04:38.114 CC lib/vhost/vhost_blk.o 00:04:38.114 CC lib/iscsi/iscsi.o 00:04:38.114 CC lib/vhost/rte_vhost_user.o 00:04:38.114 CC lib/iscsi/param.o 00:04:38.372 CC lib/ftl/utils/ftl_property.o 00:04:38.372 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:38.629 CC lib/iscsi/portal_grp.o 00:04:38.629 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:38.629 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:38.629 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:38.629 CC lib/iscsi/tgt_node.o 00:04:38.887 CC lib/iscsi/iscsi_subsystem.o 00:04:38.887 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:38.887 CC lib/iscsi/iscsi_rpc.o 00:04:38.887 CC lib/iscsi/task.o 00:04:38.887 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:39.146 LIB libspdk_nvmf.a 00:04:39.146 
CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:39.146 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:39.146 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:39.146 SO libspdk_nvmf.so.19.0 00:04:39.146 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:39.146 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:39.146 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:39.404 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:39.404 CC lib/ftl/base/ftl_base_dev.o 00:04:39.404 CC lib/ftl/base/ftl_base_bdev.o 00:04:39.404 CC lib/ftl/ftl_trace.o 00:04:39.404 LIB libspdk_vhost.a 00:04:39.404 SYMLINK libspdk_nvmf.so 00:04:39.404 SO libspdk_vhost.so.8.0 00:04:39.662 SYMLINK libspdk_vhost.so 00:04:39.662 LIB libspdk_ftl.a 00:04:39.920 LIB libspdk_iscsi.a 00:04:39.920 SO libspdk_ftl.so.9.0 00:04:39.920 SO libspdk_iscsi.so.8.0 00:04:40.179 SYMLINK libspdk_iscsi.so 00:04:40.179 SYMLINK libspdk_ftl.so 00:04:40.746 CC module/env_dpdk/env_dpdk_rpc.o 00:04:40.746 CC module/accel/error/accel_error.o 00:04:40.746 CC module/keyring/linux/keyring.o 00:04:40.746 CC module/fsdev/aio/fsdev_aio.o 00:04:40.746 CC module/sock/posix/posix.o 00:04:40.746 CC module/blob/bdev/blob_bdev.o 00:04:40.746 CC module/accel/dsa/accel_dsa.o 00:04:40.746 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:40.746 CC module/keyring/file/keyring.o 00:04:40.746 CC module/accel/ioat/accel_ioat.o 00:04:40.746 LIB libspdk_env_dpdk_rpc.a 00:04:40.746 SO libspdk_env_dpdk_rpc.so.6.0 00:04:40.746 CC module/keyring/linux/keyring_rpc.o 00:04:40.746 SYMLINK libspdk_env_dpdk_rpc.so 00:04:40.746 CC module/accel/ioat/accel_ioat_rpc.o 00:04:40.746 CC module/keyring/file/keyring_rpc.o 00:04:41.005 CC module/accel/error/accel_error_rpc.o 00:04:41.005 LIB libspdk_scheduler_dynamic.a 00:04:41.005 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:41.005 SO libspdk_scheduler_dynamic.so.4.0 00:04:41.005 LIB libspdk_keyring_linux.a 00:04:41.005 LIB libspdk_blob_bdev.a 00:04:41.005 SO libspdk_keyring_linux.so.1.0 00:04:41.005 LIB libspdk_accel_ioat.a 00:04:41.005 LIB libspdk_keyring_file.a 00:04:41.005 
SYMLINK libspdk_scheduler_dynamic.so 00:04:41.005 SO libspdk_blob_bdev.so.11.0 00:04:41.005 SO libspdk_keyring_file.so.2.0 00:04:41.005 CC module/accel/dsa/accel_dsa_rpc.o 00:04:41.005 SO libspdk_accel_ioat.so.6.0 00:04:41.005 LIB libspdk_accel_error.a 00:04:41.005 SYMLINK libspdk_keyring_linux.so 00:04:41.005 SYMLINK libspdk_blob_bdev.so 00:04:41.005 SO libspdk_accel_error.so.2.0 00:04:41.005 SYMLINK libspdk_keyring_file.so 00:04:41.005 SYMLINK libspdk_accel_ioat.so 00:04:41.005 CC module/fsdev/aio/linux_aio_mgr.o 00:04:41.005 SYMLINK libspdk_accel_error.so 00:04:41.263 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:41.263 LIB libspdk_accel_dsa.a 00:04:41.263 SO libspdk_accel_dsa.so.5.0 00:04:41.263 CC module/scheduler/gscheduler/gscheduler.o 00:04:41.263 CC module/accel/iaa/accel_iaa.o 00:04:41.263 SYMLINK libspdk_accel_dsa.so 00:04:41.263 CC module/accel/iaa/accel_iaa_rpc.o 00:04:41.263 LIB libspdk_scheduler_dpdk_governor.a 00:04:41.522 CC module/bdev/error/vbdev_error.o 00:04:41.522 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:41.522 CC module/bdev/delay/vbdev_delay.o 00:04:41.522 CC module/blobfs/bdev/blobfs_bdev.o 00:04:41.522 LIB libspdk_fsdev_aio.a 00:04:41.522 LIB libspdk_scheduler_gscheduler.a 00:04:41.522 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:41.522 CC module/bdev/error/vbdev_error_rpc.o 00:04:41.522 SO libspdk_fsdev_aio.so.1.0 00:04:41.522 SO libspdk_scheduler_gscheduler.so.4.0 00:04:41.522 CC module/bdev/gpt/gpt.o 00:04:41.522 LIB libspdk_accel_iaa.a 00:04:41.522 SO libspdk_accel_iaa.so.3.0 00:04:41.522 SYMLINK libspdk_scheduler_gscheduler.so 00:04:41.522 SYMLINK libspdk_fsdev_aio.so 00:04:41.522 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:41.522 LIB libspdk_sock_posix.a 00:04:41.522 SO libspdk_sock_posix.so.6.0 00:04:41.522 SYMLINK libspdk_accel_iaa.so 00:04:41.781 CC module/bdev/lvol/vbdev_lvol.o 00:04:41.781 LIB libspdk_bdev_error.a 00:04:41.781 CC module/bdev/gpt/vbdev_gpt.o 00:04:41.781 SYMLINK libspdk_sock_posix.so 
00:04:41.781 CC module/bdev/malloc/bdev_malloc.o 00:04:41.781 SO libspdk_bdev_error.so.6.0 00:04:41.781 LIB libspdk_blobfs_bdev.a 00:04:41.781 SO libspdk_blobfs_bdev.so.6.0 00:04:41.781 CC module/bdev/nvme/bdev_nvme.o 00:04:41.781 CC module/bdev/passthru/vbdev_passthru.o 00:04:41.781 CC module/bdev/null/bdev_null.o 00:04:41.781 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:41.781 SYMLINK libspdk_bdev_error.so 00:04:41.781 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:41.781 SYMLINK libspdk_blobfs_bdev.so 00:04:41.781 CC module/bdev/raid/bdev_raid.o 00:04:42.040 CC module/bdev/split/vbdev_split.o 00:04:42.040 LIB libspdk_bdev_delay.a 00:04:42.040 LIB libspdk_bdev_gpt.a 00:04:42.040 SO libspdk_bdev_delay.so.6.0 00:04:42.040 SO libspdk_bdev_gpt.so.6.0 00:04:42.040 CC module/bdev/null/bdev_null_rpc.o 00:04:42.040 SYMLINK libspdk_bdev_delay.so 00:04:42.040 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:42.040 SYMLINK libspdk_bdev_gpt.so 00:04:42.040 CC module/bdev/split/vbdev_split_rpc.o 00:04:42.299 LIB libspdk_bdev_passthru.a 00:04:42.299 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:42.299 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:42.299 SO libspdk_bdev_passthru.so.6.0 00:04:42.299 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:42.299 SYMLINK libspdk_bdev_passthru.so 00:04:42.299 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:42.299 CC module/bdev/nvme/nvme_rpc.o 00:04:42.299 LIB libspdk_bdev_null.a 00:04:42.299 LIB libspdk_bdev_split.a 00:04:42.299 SO libspdk_bdev_null.so.6.0 00:04:42.299 LIB libspdk_bdev_malloc.a 00:04:42.299 SO libspdk_bdev_split.so.6.0 00:04:42.299 SO libspdk_bdev_malloc.so.6.0 00:04:42.557 SYMLINK libspdk_bdev_null.so 00:04:42.557 CC module/bdev/nvme/bdev_mdns_client.o 00:04:42.557 CC module/bdev/nvme/vbdev_opal.o 00:04:42.557 SYMLINK libspdk_bdev_split.so 00:04:42.557 SYMLINK libspdk_bdev_malloc.so 00:04:42.557 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:42.557 LIB libspdk_bdev_zone_block.a 00:04:42.557 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:42.557 CC module/bdev/aio/bdev_aio.o 00:04:42.557 SO libspdk_bdev_zone_block.so.6.0 00:04:42.557 CC module/bdev/ftl/bdev_ftl.o 00:04:42.557 LIB libspdk_bdev_lvol.a 00:04:42.816 SO libspdk_bdev_lvol.so.6.0 00:04:42.816 SYMLINK libspdk_bdev_zone_block.so 00:04:42.816 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:42.816 CC module/bdev/aio/bdev_aio_rpc.o 00:04:42.816 CC module/bdev/raid/bdev_raid_rpc.o 00:04:42.816 CC module/bdev/raid/bdev_raid_sb.o 00:04:42.816 SYMLINK libspdk_bdev_lvol.so 00:04:42.816 CC module/bdev/raid/raid0.o 00:04:42.816 CC module/bdev/raid/raid1.o 00:04:43.075 LIB libspdk_bdev_ftl.a 00:04:43.075 LIB libspdk_bdev_aio.a 00:04:43.075 SO libspdk_bdev_ftl.so.6.0 00:04:43.075 CC module/bdev/iscsi/bdev_iscsi.o 00:04:43.075 SO libspdk_bdev_aio.so.6.0 00:04:43.075 SYMLINK libspdk_bdev_ftl.so 00:04:43.075 CC module/bdev/raid/concat.o 00:04:43.075 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:43.075 SYMLINK libspdk_bdev_aio.so 00:04:43.075 CC module/bdev/raid/raid5f.o 00:04:43.075 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:43.075 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:43.075 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:43.643 LIB libspdk_bdev_iscsi.a 00:04:43.643 SO libspdk_bdev_iscsi.so.6.0 00:04:43.643 SYMLINK libspdk_bdev_iscsi.so 00:04:43.643 LIB libspdk_bdev_raid.a 00:04:43.923 LIB libspdk_bdev_virtio.a 00:04:43.923 SO libspdk_bdev_raid.so.6.0 00:04:43.923 SO libspdk_bdev_virtio.so.6.0 00:04:43.923 SYMLINK libspdk_bdev_raid.so 00:04:43.923 SYMLINK libspdk_bdev_virtio.so 00:04:44.863 LIB libspdk_bdev_nvme.a 00:04:44.863 SO libspdk_bdev_nvme.so.7.0 00:04:44.863 SYMLINK libspdk_bdev_nvme.so 00:04:45.433 CC module/event/subsystems/iobuf/iobuf.o 00:04:45.433 CC module/event/subsystems/vmd/vmd.o 00:04:45.433 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:45.433 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:45.433 CC module/event/subsystems/scheduler/scheduler.o 00:04:45.433 CC 
module/event/subsystems/keyring/keyring.o 00:04:45.433 CC module/event/subsystems/fsdev/fsdev.o 00:04:45.433 CC module/event/subsystems/sock/sock.o 00:04:45.433 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:45.693 LIB libspdk_event_keyring.a 00:04:45.694 SO libspdk_event_keyring.so.1.0 00:04:45.694 LIB libspdk_event_vmd.a 00:04:45.694 LIB libspdk_event_iobuf.a 00:04:45.694 LIB libspdk_event_fsdev.a 00:04:45.694 LIB libspdk_event_scheduler.a 00:04:45.694 LIB libspdk_event_sock.a 00:04:45.694 LIB libspdk_event_vhost_blk.a 00:04:45.694 SO libspdk_event_scheduler.so.4.0 00:04:45.694 SO libspdk_event_fsdev.so.1.0 00:04:45.694 SO libspdk_event_iobuf.so.3.0 00:04:45.694 SO libspdk_event_vmd.so.6.0 00:04:45.694 SYMLINK libspdk_event_keyring.so 00:04:45.694 SO libspdk_event_sock.so.5.0 00:04:45.694 SO libspdk_event_vhost_blk.so.3.0 00:04:45.694 SYMLINK libspdk_event_scheduler.so 00:04:45.694 SYMLINK libspdk_event_vmd.so 00:04:45.694 SYMLINK libspdk_event_iobuf.so 00:04:45.694 SYMLINK libspdk_event_fsdev.so 00:04:45.694 SYMLINK libspdk_event_sock.so 00:04:45.694 SYMLINK libspdk_event_vhost_blk.so 00:04:46.264 CC module/event/subsystems/accel/accel.o 00:04:46.264 LIB libspdk_event_accel.a 00:04:46.264 SO libspdk_event_accel.so.6.0 00:04:46.264 SYMLINK libspdk_event_accel.so 00:04:46.832 CC module/event/subsystems/bdev/bdev.o 00:04:47.091 LIB libspdk_event_bdev.a 00:04:47.091 SO libspdk_event_bdev.so.6.0 00:04:47.091 SYMLINK libspdk_event_bdev.so 00:04:47.350 CC module/event/subsystems/scsi/scsi.o 00:04:47.350 CC module/event/subsystems/nbd/nbd.o 00:04:47.350 CC module/event/subsystems/ublk/ublk.o 00:04:47.350 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:47.350 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:47.609 LIB libspdk_event_ublk.a 00:04:47.609 LIB libspdk_event_scsi.a 00:04:47.609 LIB libspdk_event_nbd.a 00:04:47.609 SO libspdk_event_scsi.so.6.0 00:04:47.609 SO libspdk_event_ublk.so.3.0 00:04:47.609 SO libspdk_event_nbd.so.6.0 00:04:47.609 SYMLINK 
libspdk_event_scsi.so 00:04:47.609 SYMLINK libspdk_event_ublk.so 00:04:47.609 SYMLINK libspdk_event_nbd.so 00:04:47.609 LIB libspdk_event_nvmf.a 00:04:47.609 SO libspdk_event_nvmf.so.6.0 00:04:47.869 SYMLINK libspdk_event_nvmf.so 00:04:47.869 CC module/event/subsystems/iscsi/iscsi.o 00:04:47.869 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:48.128 LIB libspdk_event_iscsi.a 00:04:48.128 LIB libspdk_event_vhost_scsi.a 00:04:48.128 SO libspdk_event_iscsi.so.6.0 00:04:48.128 SO libspdk_event_vhost_scsi.so.3.0 00:04:48.387 SYMLINK libspdk_event_vhost_scsi.so 00:04:48.387 SYMLINK libspdk_event_iscsi.so 00:04:48.387 SO libspdk.so.6.0 00:04:48.387 SYMLINK libspdk.so 00:04:48.957 CC test/rpc_client/rpc_client_test.o 00:04:48.957 CXX app/trace/trace.o 00:04:48.957 TEST_HEADER include/spdk/accel.h 00:04:48.957 TEST_HEADER include/spdk/accel_module.h 00:04:48.957 TEST_HEADER include/spdk/assert.h 00:04:48.957 CC app/trace_record/trace_record.o 00:04:48.957 TEST_HEADER include/spdk/barrier.h 00:04:48.957 TEST_HEADER include/spdk/base64.h 00:04:48.957 TEST_HEADER include/spdk/bdev.h 00:04:48.957 TEST_HEADER include/spdk/bdev_module.h 00:04:48.957 TEST_HEADER include/spdk/bdev_zone.h 00:04:48.957 TEST_HEADER include/spdk/bit_array.h 00:04:48.957 TEST_HEADER include/spdk/bit_pool.h 00:04:48.957 TEST_HEADER include/spdk/blob_bdev.h 00:04:48.957 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:48.957 TEST_HEADER include/spdk/blobfs.h 00:04:48.957 TEST_HEADER include/spdk/blob.h 00:04:48.957 TEST_HEADER include/spdk/conf.h 00:04:48.957 TEST_HEADER include/spdk/config.h 00:04:48.957 TEST_HEADER include/spdk/cpuset.h 00:04:48.957 TEST_HEADER include/spdk/crc16.h 00:04:48.957 TEST_HEADER include/spdk/crc32.h 00:04:48.957 TEST_HEADER include/spdk/crc64.h 00:04:48.957 TEST_HEADER include/spdk/dif.h 00:04:48.957 TEST_HEADER include/spdk/dma.h 00:04:48.957 TEST_HEADER include/spdk/endian.h 00:04:48.957 TEST_HEADER include/spdk/env_dpdk.h 00:04:48.957 TEST_HEADER include/spdk/env.h 
00:04:48.957 TEST_HEADER include/spdk/event.h 00:04:48.957 TEST_HEADER include/spdk/fd_group.h 00:04:48.957 TEST_HEADER include/spdk/fd.h 00:04:48.957 TEST_HEADER include/spdk/file.h 00:04:48.957 TEST_HEADER include/spdk/fsdev.h 00:04:48.957 TEST_HEADER include/spdk/fsdev_module.h 00:04:48.957 TEST_HEADER include/spdk/ftl.h 00:04:48.957 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:48.957 TEST_HEADER include/spdk/gpt_spec.h 00:04:48.957 CC test/thread/poller_perf/poller_perf.o 00:04:48.957 TEST_HEADER include/spdk/hexlify.h 00:04:48.957 CC examples/util/zipf/zipf.o 00:04:48.957 CC examples/ioat/perf/perf.o 00:04:48.957 TEST_HEADER include/spdk/histogram_data.h 00:04:48.957 TEST_HEADER include/spdk/idxd.h 00:04:48.957 TEST_HEADER include/spdk/idxd_spec.h 00:04:48.957 TEST_HEADER include/spdk/init.h 00:04:48.957 TEST_HEADER include/spdk/ioat.h 00:04:48.957 TEST_HEADER include/spdk/ioat_spec.h 00:04:48.957 TEST_HEADER include/spdk/iscsi_spec.h 00:04:48.957 TEST_HEADER include/spdk/json.h 00:04:48.957 TEST_HEADER include/spdk/jsonrpc.h 00:04:48.957 TEST_HEADER include/spdk/keyring.h 00:04:48.957 TEST_HEADER include/spdk/keyring_module.h 00:04:48.957 TEST_HEADER include/spdk/likely.h 00:04:48.957 TEST_HEADER include/spdk/log.h 00:04:48.957 CC test/dma/test_dma/test_dma.o 00:04:48.957 TEST_HEADER include/spdk/lvol.h 00:04:48.957 TEST_HEADER include/spdk/md5.h 00:04:48.957 TEST_HEADER include/spdk/memory.h 00:04:48.957 TEST_HEADER include/spdk/mmio.h 00:04:48.957 TEST_HEADER include/spdk/nbd.h 00:04:48.957 TEST_HEADER include/spdk/net.h 00:04:48.957 TEST_HEADER include/spdk/notify.h 00:04:48.957 TEST_HEADER include/spdk/nvme.h 00:04:48.957 TEST_HEADER include/spdk/nvme_intel.h 00:04:48.957 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:48.957 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:48.957 TEST_HEADER include/spdk/nvme_spec.h 00:04:48.957 TEST_HEADER include/spdk/nvme_zns.h 00:04:48.957 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:48.957 TEST_HEADER 
include/spdk/nvmf_fc_spec.h 00:04:48.957 TEST_HEADER include/spdk/nvmf.h 00:04:48.957 TEST_HEADER include/spdk/nvmf_spec.h 00:04:48.957 TEST_HEADER include/spdk/nvmf_transport.h 00:04:48.957 CC test/app/bdev_svc/bdev_svc.o 00:04:48.957 TEST_HEADER include/spdk/opal.h 00:04:48.957 TEST_HEADER include/spdk/opal_spec.h 00:04:48.957 CC test/env/mem_callbacks/mem_callbacks.o 00:04:48.957 TEST_HEADER include/spdk/pci_ids.h 00:04:48.957 TEST_HEADER include/spdk/pipe.h 00:04:48.957 TEST_HEADER include/spdk/queue.h 00:04:48.957 TEST_HEADER include/spdk/reduce.h 00:04:48.957 TEST_HEADER include/spdk/rpc.h 00:04:48.957 TEST_HEADER include/spdk/scheduler.h 00:04:48.957 TEST_HEADER include/spdk/scsi.h 00:04:48.957 TEST_HEADER include/spdk/scsi_spec.h 00:04:48.957 TEST_HEADER include/spdk/sock.h 00:04:48.957 TEST_HEADER include/spdk/stdinc.h 00:04:48.957 TEST_HEADER include/spdk/string.h 00:04:48.957 TEST_HEADER include/spdk/thread.h 00:04:48.957 TEST_HEADER include/spdk/trace.h 00:04:48.957 TEST_HEADER include/spdk/trace_parser.h 00:04:48.957 TEST_HEADER include/spdk/tree.h 00:04:48.957 TEST_HEADER include/spdk/ublk.h 00:04:48.957 TEST_HEADER include/spdk/util.h 00:04:48.957 TEST_HEADER include/spdk/uuid.h 00:04:48.957 TEST_HEADER include/spdk/version.h 00:04:48.957 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:48.957 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:48.957 TEST_HEADER include/spdk/vhost.h 00:04:48.957 TEST_HEADER include/spdk/vmd.h 00:04:48.957 TEST_HEADER include/spdk/xor.h 00:04:48.957 TEST_HEADER include/spdk/zipf.h 00:04:48.957 CXX test/cpp_headers/accel.o 00:04:48.957 LINK rpc_client_test 00:04:48.957 LINK poller_perf 00:04:48.957 LINK zipf 00:04:49.217 LINK ioat_perf 00:04:49.217 LINK spdk_trace_record 00:04:49.217 LINK bdev_svc 00:04:49.217 CXX test/cpp_headers/accel_module.o 00:04:49.217 LINK spdk_trace 00:04:49.217 CC test/env/vtophys/vtophys.o 00:04:49.217 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:49.217 CC 
examples/ioat/verify/verify.o 00:04:49.217 CC test/env/memory/memory_ut.o 00:04:49.477 CC test/env/pci/pci_ut.o 00:04:49.477 CXX test/cpp_headers/assert.o 00:04:49.477 LINK test_dma 00:04:49.477 LINK vtophys 00:04:49.477 LINK env_dpdk_post_init 00:04:49.477 LINK mem_callbacks 00:04:49.477 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:49.477 CC app/nvmf_tgt/nvmf_main.o 00:04:49.477 LINK verify 00:04:49.477 CXX test/cpp_headers/barrier.o 00:04:49.477 CXX test/cpp_headers/base64.o 00:04:49.477 CXX test/cpp_headers/bdev.o 00:04:49.736 LINK nvmf_tgt 00:04:49.736 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:49.736 CXX test/cpp_headers/bdev_module.o 00:04:49.736 CC app/iscsi_tgt/iscsi_tgt.o 00:04:49.736 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:49.736 CC test/app/histogram_perf/histogram_perf.o 00:04:49.736 LINK pci_ut 00:04:49.736 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:49.996 LINK nvme_fuzz 00:04:49.996 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:49.996 CXX test/cpp_headers/bdev_zone.o 00:04:49.996 LINK iscsi_tgt 00:04:49.996 LINK histogram_perf 00:04:49.996 LINK interrupt_tgt 00:04:49.996 CC test/event/event_perf/event_perf.o 00:04:50.256 CXX test/cpp_headers/bit_array.o 00:04:50.256 LINK event_perf 00:04:50.256 CC test/nvme/aer/aer.o 00:04:50.256 CXX test/cpp_headers/bit_pool.o 00:04:50.256 CC test/accel/dif/dif.o 00:04:50.256 CC app/spdk_tgt/spdk_tgt.o 00:04:50.256 CC test/blobfs/mkfs/mkfs.o 00:04:50.516 LINK vhost_fuzz 00:04:50.516 LINK memory_ut 00:04:50.516 CXX test/cpp_headers/blob_bdev.o 00:04:50.516 CC examples/thread/thread/thread_ex.o 00:04:50.516 CC test/event/reactor/reactor.o 00:04:50.516 LINK mkfs 00:04:50.516 LINK aer 00:04:50.775 LINK spdk_tgt 00:04:50.775 CXX test/cpp_headers/blobfs_bdev.o 00:04:50.775 CXX test/cpp_headers/blobfs.o 00:04:50.775 LINK reactor 00:04:50.775 CC test/event/reactor_perf/reactor_perf.o 00:04:50.775 CXX test/cpp_headers/blob.o 00:04:50.776 LINK thread 00:04:50.776 CC test/nvme/reset/reset.o 00:04:50.776 
LINK reactor_perf 00:04:51.050 CC app/spdk_lspci/spdk_lspci.o 00:04:51.050 CC app/spdk_nvme_perf/perf.o 00:04:51.050 CC app/spdk_nvme_identify/identify.o 00:04:51.050 CC app/spdk_nvme_discover/discovery_aer.o 00:04:51.050 CXX test/cpp_headers/conf.o 00:04:51.050 LINK spdk_lspci 00:04:51.050 LINK dif 00:04:51.050 LINK reset 00:04:51.050 CC test/event/app_repeat/app_repeat.o 00:04:51.050 CXX test/cpp_headers/config.o 00:04:51.311 CXX test/cpp_headers/cpuset.o 00:04:51.311 LINK spdk_nvme_discover 00:04:51.311 CXX test/cpp_headers/crc16.o 00:04:51.311 CC examples/sock/hello_world/hello_sock.o 00:04:51.311 CXX test/cpp_headers/crc32.o 00:04:51.311 LINK app_repeat 00:04:51.311 CC test/nvme/sgl/sgl.o 00:04:51.311 CXX test/cpp_headers/crc64.o 00:04:51.311 CC app/spdk_top/spdk_top.o 00:04:51.571 CC test/nvme/e2edp/nvme_dp.o 00:04:51.571 CC test/nvme/overhead/overhead.o 00:04:51.571 LINK hello_sock 00:04:51.571 CXX test/cpp_headers/dif.o 00:04:51.571 CC test/event/scheduler/scheduler.o 00:04:51.571 LINK sgl 00:04:51.571 CXX test/cpp_headers/dma.o 00:04:51.571 LINK nvme_dp 00:04:51.831 LINK iscsi_fuzz 00:04:51.831 LINK overhead 00:04:51.831 LINK spdk_nvme_identify 00:04:51.831 CC examples/vmd/lsvmd/lsvmd.o 00:04:51.831 LINK scheduler 00:04:51.831 CXX test/cpp_headers/endian.o 00:04:51.831 CC examples/vmd/led/led.o 00:04:51.831 LINK spdk_nvme_perf 00:04:52.092 LINK lsvmd 00:04:52.092 CC test/app/jsoncat/jsoncat.o 00:04:52.092 CXX test/cpp_headers/env_dpdk.o 00:04:52.092 CC test/nvme/err_injection/err_injection.o 00:04:52.092 CC test/lvol/esnap/esnap.o 00:04:52.092 LINK led 00:04:52.092 CXX test/cpp_headers/env.o 00:04:52.092 CC app/vhost/vhost.o 00:04:52.092 CC app/spdk_dd/spdk_dd.o 00:04:52.092 LINK jsoncat 00:04:52.352 LINK err_injection 00:04:52.352 CC test/app/stub/stub.o 00:04:52.352 CXX test/cpp_headers/event.o 00:04:52.352 CC test/bdev/bdevio/bdevio.o 00:04:52.352 LINK vhost 00:04:52.352 LINK spdk_top 00:04:52.352 CC examples/idxd/perf/perf.o 00:04:52.352 LINK stub 
00:04:52.352 CXX test/cpp_headers/fd_group.o 00:04:52.352 CC test/nvme/startup/startup.o 00:04:52.612 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:52.612 LINK spdk_dd 00:04:52.612 CXX test/cpp_headers/fd.o 00:04:52.612 LINK startup 00:04:52.612 CC examples/accel/perf/accel_perf.o 00:04:52.612 LINK bdevio 00:04:52.612 CC app/fio/nvme/fio_plugin.o 00:04:52.612 LINK idxd_perf 00:04:52.871 CXX test/cpp_headers/file.o 00:04:52.872 LINK hello_fsdev 00:04:52.872 CC examples/blob/hello_world/hello_blob.o 00:04:52.872 CXX test/cpp_headers/fsdev.o 00:04:52.872 CC test/nvme/reserve/reserve.o 00:04:52.872 CC examples/nvme/hello_world/hello_world.o 00:04:52.872 CXX test/cpp_headers/fsdev_module.o 00:04:52.872 CC test/nvme/simple_copy/simple_copy.o 00:04:53.131 LINK hello_blob 00:04:53.131 CC examples/blob/cli/blobcli.o 00:04:53.131 CXX test/cpp_headers/ftl.o 00:04:53.131 CC examples/nvme/reconnect/reconnect.o 00:04:53.131 LINK reserve 00:04:53.131 LINK hello_world 00:04:53.131 LINK simple_copy 00:04:53.131 LINK accel_perf 00:04:53.131 CXX test/cpp_headers/fuse_dispatcher.o 00:04:53.389 CC app/fio/bdev/fio_plugin.o 00:04:53.389 LINK spdk_nvme 00:04:53.389 CC test/nvme/connect_stress/connect_stress.o 00:04:53.389 CXX test/cpp_headers/gpt_spec.o 00:04:53.389 CC test/nvme/boot_partition/boot_partition.o 00:04:53.389 CXX test/cpp_headers/hexlify.o 00:04:53.389 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:53.389 LINK reconnect 00:04:53.648 CC examples/bdev/hello_world/hello_bdev.o 00:04:53.648 LINK boot_partition 00:04:53.648 CXX test/cpp_headers/histogram_data.o 00:04:53.648 LINK connect_stress 00:04:53.648 LINK blobcli 00:04:53.648 CC examples/bdev/bdevperf/bdevperf.o 00:04:53.648 CC test/nvme/compliance/nvme_compliance.o 00:04:53.648 CXX test/cpp_headers/idxd.o 00:04:53.905 CC test/nvme/fused_ordering/fused_ordering.o 00:04:53.905 LINK hello_bdev 00:04:53.905 LINK spdk_bdev 00:04:53.905 CXX test/cpp_headers/idxd_spec.o 00:04:53.905 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:04:53.905 CXX test/cpp_headers/init.o 00:04:53.905 CXX test/cpp_headers/ioat.o 00:04:53.905 CXX test/cpp_headers/ioat_spec.o 00:04:53.905 LINK nvme_manage 00:04:53.905 LINK fused_ordering 00:04:53.905 CXX test/cpp_headers/iscsi_spec.o 00:04:53.905 CXX test/cpp_headers/json.o 00:04:53.905 LINK doorbell_aers 00:04:53.905 LINK nvme_compliance 00:04:54.164 CXX test/cpp_headers/jsonrpc.o 00:04:54.164 CXX test/cpp_headers/keyring.o 00:04:54.164 CXX test/cpp_headers/keyring_module.o 00:04:54.164 CC examples/nvme/arbitration/arbitration.o 00:04:54.164 CC test/nvme/fdp/fdp.o 00:04:54.164 CC test/nvme/cuse/cuse.o 00:04:54.164 CC examples/nvme/abort/abort.o 00:04:54.164 CC examples/nvme/hotplug/hotplug.o 00:04:54.164 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:54.422 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:54.422 CXX test/cpp_headers/likely.o 00:04:54.422 LINK bdevperf 00:04:54.422 LINK cmb_copy 00:04:54.422 LINK pmr_persistence 00:04:54.422 LINK hotplug 00:04:54.422 CXX test/cpp_headers/log.o 00:04:54.422 LINK arbitration 00:04:54.680 LINK fdp 00:04:54.680 CXX test/cpp_headers/lvol.o 00:04:54.680 CXX test/cpp_headers/md5.o 00:04:54.680 LINK abort 00:04:54.680 CXX test/cpp_headers/memory.o 00:04:54.680 CXX test/cpp_headers/mmio.o 00:04:54.680 CXX test/cpp_headers/nbd.o 00:04:54.680 CXX test/cpp_headers/net.o 00:04:54.680 CXX test/cpp_headers/nvme.o 00:04:54.680 CXX test/cpp_headers/notify.o 00:04:54.680 CXX test/cpp_headers/nvme_intel.o 00:04:54.939 CXX test/cpp_headers/nvme_ocssd.o 00:04:54.939 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:54.939 CXX test/cpp_headers/nvme_spec.o 00:04:54.939 CXX test/cpp_headers/nvme_zns.o 00:04:54.939 CXX test/cpp_headers/nvmf_cmd.o 00:04:54.939 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:54.939 CXX test/cpp_headers/nvmf.o 00:04:54.939 CXX test/cpp_headers/nvmf_spec.o 00:04:54.939 CXX test/cpp_headers/nvmf_transport.o 00:04:54.939 CC examples/nvmf/nvmf/nvmf.o 00:04:54.939 CXX 
test/cpp_headers/opal.o 00:04:55.200 CXX test/cpp_headers/opal_spec.o 00:04:55.200 CXX test/cpp_headers/pci_ids.o 00:04:55.200 CXX test/cpp_headers/pipe.o 00:04:55.200 CXX test/cpp_headers/queue.o 00:04:55.200 CXX test/cpp_headers/reduce.o 00:04:55.200 CXX test/cpp_headers/rpc.o 00:04:55.200 CXX test/cpp_headers/scheduler.o 00:04:55.200 CXX test/cpp_headers/scsi.o 00:04:55.200 CXX test/cpp_headers/scsi_spec.o 00:04:55.200 CXX test/cpp_headers/sock.o 00:04:55.200 CXX test/cpp_headers/stdinc.o 00:04:55.200 LINK nvmf 00:04:55.200 CXX test/cpp_headers/string.o 00:04:55.200 CXX test/cpp_headers/thread.o 00:04:55.200 CXX test/cpp_headers/trace.o 00:04:55.467 CXX test/cpp_headers/trace_parser.o 00:04:55.467 CXX test/cpp_headers/tree.o 00:04:55.467 CXX test/cpp_headers/ublk.o 00:04:55.467 CXX test/cpp_headers/util.o 00:04:55.467 CXX test/cpp_headers/uuid.o 00:04:55.467 CXX test/cpp_headers/version.o 00:04:55.467 CXX test/cpp_headers/vfio_user_pci.o 00:04:55.467 CXX test/cpp_headers/vfio_user_spec.o 00:04:55.467 CXX test/cpp_headers/vhost.o 00:04:55.467 CXX test/cpp_headers/vmd.o 00:04:55.467 CXX test/cpp_headers/xor.o 00:04:55.467 CXX test/cpp_headers/zipf.o 00:04:55.467 LINK cuse 00:04:58.053 LINK esnap 00:04:58.053 00:04:58.053 real 1m23.101s 00:04:58.053 user 6m15.396s 00:04:58.053 sys 1m17.772s 00:04:58.053 16:30:56 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:58.053 16:30:56 make -- common/autotest_common.sh@10 -- $ set +x 00:04:58.053 ************************************ 00:04:58.053 END TEST make 00:04:58.053 ************************************ 00:04:58.314 16:30:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:58.314 16:30:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:58.314 16:30:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:58.314 16:30:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.314 16:30:56 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:58.314 16:30:56 -- pm/common@44 -- $ pid=6197 00:04:58.314 16:30:56 -- pm/common@50 -- $ kill -TERM 6197 00:04:58.314 16:30:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.314 16:30:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:58.314 16:30:56 -- pm/common@44 -- $ pid=6199 00:04:58.314 16:30:56 -- pm/common@50 -- $ kill -TERM 6199 00:04:58.314 16:30:57 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:58.314 16:30:57 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:58.314 16:30:57 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:58.314 16:30:57 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:58.314 16:30:57 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.314 16:30:57 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.314 16:30:57 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.314 16:30:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.314 16:30:57 -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.314 16:30:57 -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.314 16:30:57 -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.314 16:30:57 -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.314 16:30:57 -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.314 16:30:57 -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.314 16:30:57 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.314 16:30:57 -- scripts/common.sh@344 -- # case "$op" in 00:04:58.314 16:30:57 -- scripts/common.sh@345 -- # : 1 00:04:58.314 16:30:57 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.314 16:30:57 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.314 16:30:57 -- scripts/common.sh@365 -- # decimal 1 00:04:58.314 16:30:57 -- scripts/common.sh@353 -- # local d=1 00:04:58.314 16:30:57 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.314 16:30:57 -- scripts/common.sh@355 -- # echo 1 00:04:58.314 16:30:57 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.314 16:30:57 -- scripts/common.sh@366 -- # decimal 2 00:04:58.314 16:30:57 -- scripts/common.sh@353 -- # local d=2 00:04:58.314 16:30:57 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.314 16:30:57 -- scripts/common.sh@355 -- # echo 2 00:04:58.314 16:30:57 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.314 16:30:57 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.314 16:30:57 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.314 16:30:57 -- scripts/common.sh@368 -- # return 0 00:04:58.314 16:30:57 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.314 16:30:57 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:58.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.314 --rc genhtml_branch_coverage=1 00:04:58.314 --rc genhtml_function_coverage=1 00:04:58.314 --rc genhtml_legend=1 00:04:58.314 --rc geninfo_all_blocks=1 00:04:58.314 --rc geninfo_unexecuted_blocks=1 00:04:58.314 00:04:58.314 ' 00:04:58.314 16:30:57 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:58.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.314 --rc genhtml_branch_coverage=1 00:04:58.314 --rc genhtml_function_coverage=1 00:04:58.314 --rc genhtml_legend=1 00:04:58.314 --rc geninfo_all_blocks=1 00:04:58.314 --rc geninfo_unexecuted_blocks=1 00:04:58.314 00:04:58.314 ' 00:04:58.314 16:30:57 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:58.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.314 --rc genhtml_branch_coverage=1 00:04:58.314 --rc 
genhtml_function_coverage=1 00:04:58.314 --rc genhtml_legend=1 00:04:58.314 --rc geninfo_all_blocks=1 00:04:58.314 --rc geninfo_unexecuted_blocks=1 00:04:58.314 00:04:58.314 ' 00:04:58.314 16:30:57 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:58.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.314 --rc genhtml_branch_coverage=1 00:04:58.314 --rc genhtml_function_coverage=1 00:04:58.314 --rc genhtml_legend=1 00:04:58.314 --rc geninfo_all_blocks=1 00:04:58.314 --rc geninfo_unexecuted_blocks=1 00:04:58.314 00:04:58.314 ' 00:04:58.314 16:30:57 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:58.314 16:30:57 -- nvmf/common.sh@7 -- # uname -s 00:04:58.314 16:30:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.314 16:30:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.314 16:30:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.314 16:30:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.314 16:30:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.314 16:30:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.314 16:30:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.314 16:30:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.314 16:30:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.314 16:30:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.314 16:30:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3b9abcfe-3bac-4150-8795-ff18896db5ae 00:04:58.314 16:30:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=3b9abcfe-3bac-4150-8795-ff18896db5ae 00:04:58.314 16:30:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.314 16:30:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.314 16:30:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.314 16:30:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:58.314 16:30:57 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:58.314 16:30:57 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.314 16:30:57 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.314 16:30:57 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.314 16:30:57 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.314 16:30:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.314 16:30:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.314 16:30:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.314 16:30:57 -- paths/export.sh@5 -- # export PATH 00:04:58.314 16:30:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.314 16:30:57 -- nvmf/common.sh@51 -- # : 0 00:04:58.314 16:30:57 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.314 16:30:57 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.314 16:30:57 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:58.314 16:30:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.314 16:30:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.314 16:30:57 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.314 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.314 16:30:57 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.314 16:30:57 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.314 16:30:57 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.314 16:30:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:58.314 16:30:57 -- spdk/autotest.sh@32 -- # uname -s 00:04:58.314 16:30:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:58.314 16:30:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:58.314 16:30:57 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:58.592 16:30:57 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:58.592 16:30:57 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:58.592 16:30:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:58.592 16:30:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:58.592 16:30:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:58.592 16:30:57 -- spdk/autotest.sh@48 -- # udevadm_pid=66959 00:04:58.592 16:30:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:58.592 16:30:57 -- pm/common@17 -- # local monitor 00:04:58.592 16:30:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.592 16:30:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.592 16:30:57 -- pm/common@21 -- # date +%s 00:04:58.592 16:30:57 -- pm/common@21 -- # date +%s 00:04:58.592 16:30:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:58.592 16:30:57 -- 
pm/common@25 -- # sleep 1 00:04:58.592 16:30:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733589057 00:04:58.592 16:30:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733589057 00:04:58.592 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733589057_collect-cpu-load.pm.log 00:04:58.592 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733589057_collect-vmstat.pm.log 00:04:59.528 16:30:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:59.528 16:30:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:59.528 16:30:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:59.528 16:30:58 -- common/autotest_common.sh@10 -- # set +x 00:04:59.528 16:30:58 -- spdk/autotest.sh@59 -- # create_test_list 00:04:59.528 16:30:58 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:59.528 16:30:58 -- common/autotest_common.sh@10 -- # set +x 00:04:59.528 16:30:58 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:59.528 16:30:58 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:59.528 16:30:58 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:59.529 16:30:58 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:59.529 16:30:58 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:59.529 16:30:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:59.529 16:30:58 -- common/autotest_common.sh@1455 -- # uname 00:04:59.529 16:30:58 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:59.529 16:30:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:59.529 16:30:58 -- common/autotest_common.sh@1475 -- # 
uname 00:04:59.529 16:30:58 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:59.529 16:30:58 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:59.529 16:30:58 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:59.787 lcov: LCOV version 1.15 00:04:59.787 16:30:58 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:14.685 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:14.685 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:29.603 16:31:26 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:29.603 16:31:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.603 16:31:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.603 16:31:26 -- spdk/autotest.sh@78 -- # rm -f 00:05:29.603 16:31:26 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.603 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:29.603 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:29.603 16:31:27 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:29.603 16:31:27 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:29.603 16:31:27 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:29.603 16:31:27 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:29.603 16:31:27 
-- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:29.603 16:31:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:29.603 16:31:27 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:29.603 16:31:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:29.603 16:31:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:29.603 16:31:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:29.603 16:31:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:29.603 16:31:27 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:29.603 16:31:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:29.603 16:31:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:29.603 16:31:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:29.603 16:31:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:29.603 16:31:27 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:29.603 16:31:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:29.603 16:31:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:29.603 16:31:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:29.603 16:31:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:29.603 16:31:27 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:29.603 16:31:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:29.603 16:31:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:29.603 16:31:27 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:29.603 16:31:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:29.603 16:31:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:29.603 16:31:27 -- spdk/autotest.sh@100 -- # block_in_use 
/dev/nvme0n1 00:05:29.603 16:31:27 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:29.603 16:31:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:29.603 No valid GPT data, bailing 00:05:29.603 16:31:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:29.603 16:31:27 -- scripts/common.sh@394 -- # pt= 00:05:29.603 16:31:27 -- scripts/common.sh@395 -- # return 1 00:05:29.603 16:31:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:29.603 1+0 records in 00:05:29.603 1+0 records out 00:05:29.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474623 s, 221 MB/s 00:05:29.603 16:31:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:29.603 16:31:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:29.603 16:31:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:29.603 16:31:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:29.603 16:31:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:29.603 No valid GPT data, bailing 00:05:29.603 16:31:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:29.603 16:31:27 -- scripts/common.sh@394 -- # pt= 00:05:29.603 16:31:27 -- scripts/common.sh@395 -- # return 1 00:05:29.603 16:31:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:29.603 1+0 records in 00:05:29.603 1+0 records out 00:05:29.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045145 s, 232 MB/s 00:05:29.603 16:31:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:29.603 16:31:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:29.603 16:31:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:29.603 16:31:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:29.603 16:31:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:29.603 No 
valid GPT data, bailing 00:05:29.603 16:31:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:29.603 16:31:27 -- scripts/common.sh@394 -- # pt= 00:05:29.603 16:31:27 -- scripts/common.sh@395 -- # return 1 00:05:29.603 16:31:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:29.603 1+0 records in 00:05:29.603 1+0 records out 00:05:29.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00604779 s, 173 MB/s 00:05:29.603 16:31:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:29.603 16:31:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:29.603 16:31:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:29.603 16:31:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:29.603 16:31:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:29.603 No valid GPT data, bailing 00:05:29.603 16:31:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:29.603 16:31:27 -- scripts/common.sh@394 -- # pt= 00:05:29.603 16:31:27 -- scripts/common.sh@395 -- # return 1 00:05:29.603 16:31:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:29.603 1+0 records in 00:05:29.603 1+0 records out 00:05:29.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0066676 s, 157 MB/s 00:05:29.603 16:31:27 -- spdk/autotest.sh@105 -- # sync 00:05:29.603 16:31:27 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:29.603 16:31:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:29.603 16:31:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:32.137 16:31:30 -- spdk/autotest.sh@111 -- # uname -s 00:05:32.137 16:31:30 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:32.137 16:31:30 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:32.137 16:31:30 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:32.706 
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.706 Hugepages 00:05:32.706 node hugesize free / total 00:05:32.706 node0 1048576kB 0 / 0 00:05:32.706 node0 2048kB 0 / 0 00:05:32.706 00:05:32.706 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:32.706 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:32.967 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:32.967 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:32.967 16:31:31 -- spdk/autotest.sh@117 -- # uname -s 00:05:32.967 16:31:31 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:32.967 16:31:31 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:32.967 16:31:31 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:33.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.162 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.162 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.162 16:31:32 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:35.093 16:31:33 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:35.093 16:31:33 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:35.093 16:31:33 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:35.093 16:31:33 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:35.093 16:31:33 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:35.093 16:31:33 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:35.093 16:31:33 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:35.093 16:31:33 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:35.093 16:31:33 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:35.351 16:31:34 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:35.351 16:31:34 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:35.351 16:31:34 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:35.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.922 Waiting for block devices as requested 00:05:35.922 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:35.922 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.180 16:31:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:36.180 16:31:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:36.180 16:31:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:36.180 16:31:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:36.180 16:31:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:36.181 16:31:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:36.181 16:31:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:36.181 16:31:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:36.181 16:31:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:36.181 16:31:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:36.181 16:31:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:36.181 16:31:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:36.181 16:31:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:36.181 16:31:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:36.181 16:31:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:36.181 16:31:34 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:05:36.181 16:31:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:36.181 16:31:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:36.181 16:31:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:36.181 16:31:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:36.181 16:31:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:36.181 16:31:34 -- common/autotest_common.sh@1541 -- # continue 00:05:36.181 16:31:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:36.181 16:31:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:36.181 16:31:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:36.181 16:31:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:36.181 16:31:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:36.181 16:31:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:36.181 16:31:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:36.181 16:31:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:36.181 16:31:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:36.181 16:31:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:36.181 16:31:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:36.181 16:31:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:36.181 16:31:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:36.181 16:31:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:36.181 16:31:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:36.181 16:31:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:36.181 16:31:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:05:36.181 16:31:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:36.181 16:31:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:36.181 16:31:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:36.181 16:31:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:36.181 16:31:34 -- common/autotest_common.sh@1541 -- # continue 00:05:36.181 16:31:34 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:36.181 16:31:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:36.181 16:31:34 -- common/autotest_common.sh@10 -- # set +x 00:05:36.181 16:31:34 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:36.181 16:31:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:36.181 16:31:34 -- common/autotest_common.sh@10 -- # set +x 00:05:36.181 16:31:34 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.122 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.122 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.122 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.379 16:31:36 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:37.379 16:31:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:37.379 16:31:36 -- common/autotest_common.sh@10 -- # set +x 00:05:37.379 16:31:36 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:37.379 16:31:36 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:37.379 16:31:36 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:37.379 16:31:36 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:37.379 16:31:36 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:37.379 16:31:36 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:37.379 16:31:36 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:37.379 16:31:36 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:37.379 
16:31:36 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:37.379 16:31:36 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:37.379 16:31:36 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:37.379 16:31:36 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:37.379 16:31:36 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:37.379 16:31:36 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:37.379 16:31:36 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:37.379 16:31:36 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:37.379 16:31:36 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:37.379 16:31:36 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:37.379 16:31:36 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:37.379 16:31:36 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:37.379 16:31:36 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:37.379 16:31:36 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:37.379 16:31:36 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:37.379 16:31:36 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:37.379 16:31:36 -- common/autotest_common.sh@1570 -- # return 0 00:05:37.379 16:31:36 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:37.379 16:31:36 -- common/autotest_common.sh@1578 -- # return 0 00:05:37.379 16:31:36 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:37.379 16:31:36 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:37.379 16:31:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:37.379 16:31:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:37.379 16:31:36 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:37.379 16:31:36 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.379 16:31:36 -- common/autotest_common.sh@10 -- # set +x 00:05:37.379 16:31:36 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:37.379 16:31:36 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:37.379 16:31:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.379 16:31:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.379 16:31:36 -- common/autotest_common.sh@10 -- # set +x 00:05:37.380 ************************************ 00:05:37.380 START TEST env 00:05:37.380 ************************************ 00:05:37.380 16:31:36 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:37.637 * Looking for test storage... 00:05:37.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:37.637 16:31:36 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:37.637 16:31:36 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:37.637 16:31:36 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:37.637 16:31:36 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:37.637 16:31:36 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.637 16:31:36 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.637 16:31:36 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.637 16:31:36 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.637 16:31:36 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.637 16:31:36 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.637 16:31:36 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.637 16:31:36 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.637 16:31:36 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.637 16:31:36 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.637 16:31:36 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.637 16:31:36 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:37.637 16:31:36 env -- scripts/common.sh@345 -- # : 1 00:05:37.637 16:31:36 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.637 16:31:36 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.637 16:31:36 env -- scripts/common.sh@365 -- # decimal 1 00:05:37.637 16:31:36 env -- scripts/common.sh@353 -- # local d=1 00:05:37.637 16:31:36 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.637 16:31:36 env -- scripts/common.sh@355 -- # echo 1 00:05:37.637 16:31:36 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.637 16:31:36 env -- scripts/common.sh@366 -- # decimal 2 00:05:37.637 16:31:36 env -- scripts/common.sh@353 -- # local d=2 00:05:37.637 16:31:36 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.637 16:31:36 env -- scripts/common.sh@355 -- # echo 2 00:05:37.637 16:31:36 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.637 16:31:36 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.637 16:31:36 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.637 16:31:36 env -- scripts/common.sh@368 -- # return 0 00:05:37.637 16:31:36 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.637 16:31:36 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:37.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.637 --rc genhtml_branch_coverage=1 00:05:37.637 --rc genhtml_function_coverage=1 00:05:37.637 --rc genhtml_legend=1 00:05:37.637 --rc geninfo_all_blocks=1 00:05:37.637 --rc geninfo_unexecuted_blocks=1 00:05:37.637 00:05:37.637 ' 00:05:37.637 16:31:36 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:37.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.637 --rc genhtml_branch_coverage=1 00:05:37.637 --rc genhtml_function_coverage=1 00:05:37.637 --rc genhtml_legend=1 00:05:37.637 --rc 
geninfo_all_blocks=1 00:05:37.637 --rc geninfo_unexecuted_blocks=1 00:05:37.637 00:05:37.637 ' 00:05:37.637 16:31:36 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:37.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.637 --rc genhtml_branch_coverage=1 00:05:37.637 --rc genhtml_function_coverage=1 00:05:37.637 --rc genhtml_legend=1 00:05:37.637 --rc geninfo_all_blocks=1 00:05:37.637 --rc geninfo_unexecuted_blocks=1 00:05:37.637 00:05:37.637 ' 00:05:37.637 16:31:36 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:37.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.638 --rc genhtml_branch_coverage=1 00:05:37.638 --rc genhtml_function_coverage=1 00:05:37.638 --rc genhtml_legend=1 00:05:37.638 --rc geninfo_all_blocks=1 00:05:37.638 --rc geninfo_unexecuted_blocks=1 00:05:37.638 00:05:37.638 ' 00:05:37.638 16:31:36 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:37.638 16:31:36 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.638 16:31:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.638 16:31:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.638 ************************************ 00:05:37.638 START TEST env_memory 00:05:37.638 ************************************ 00:05:37.638 16:31:36 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:37.638 00:05:37.638 00:05:37.638 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.638 http://cunit.sourceforge.net/ 00:05:37.638 00:05:37.638 00:05:37.638 Suite: memory 00:05:37.896 Test: alloc and free memory map ...[2024-12-07 16:31:36.548876] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:37.896 passed 00:05:37.896 Test: mem map translation ...[2024-12-07 16:31:36.593591] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:37.896 [2024-12-07 16:31:36.593666] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:37.896 [2024-12-07 16:31:36.593730] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:37.896 [2024-12-07 16:31:36.593751] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:37.896 passed 00:05:37.896 Test: mem map registration ...[2024-12-07 16:31:36.659456] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:37.896 [2024-12-07 16:31:36.659551] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:37.896 passed 00:05:37.896 Test: mem map adjacent registrations ...passed 00:05:37.896 00:05:37.896 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.896 suites 1 1 n/a 0 0 00:05:37.896 tests 4 4 4 0 0 00:05:37.896 asserts 152 152 152 0 n/a 00:05:37.896 00:05:37.896 Elapsed time = 0.240 seconds 00:05:37.896 00:05:37.896 real 0m0.287s 00:05:37.896 user 0m0.257s 00:05:37.896 sys 0m0.020s 00:05:37.896 16:31:36 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.896 16:31:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:37.896 ************************************ 00:05:37.896 END TEST env_memory 00:05:37.896 ************************************ 00:05:38.155 16:31:36 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:38.155 
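The lcov version gate traced earlier in this log (`scripts/common.sh`'s `cmp_versions`, invoked as `lt 1.15 2`) splits both versions on `.-:` and compares them component by component. A minimal standalone sketch of that logic, with a hypothetical function name, might look like:

```shell
# Hypothetical re-creation of the version comparison exercised by
# scripts/common.sh above: split on '.', '-', ':' and compare each
# numeric component, padding the shorter version with zeros.
ver_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    local v x y
    for (( v = 0; v < n; v++ )); do
        x=${a[v]:-0} y=${b[v]:-0}
        (( x < y )) && return 0    # strictly less: success
        (( x > y )) && return 1    # strictly greater: failure
    done
    return 1                       # equal is not "less than"
}

ver_lt 1.15 2 && echo "lcov is older than 2.x"
```

This mirrors why the log chooses the pre-2.x `--rc lcov_branch_coverage=1` spelling of the coverage flags; the real helper lives in `scripts/common.sh` and may differ in detail.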
16:31:36 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.155 16:31:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.155 16:31:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.155 ************************************ 00:05:38.155 START TEST env_vtophys 00:05:38.155 ************************************ 00:05:38.155 16:31:36 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:38.155 EAL: lib.eal log level changed from notice to debug 00:05:38.155 EAL: Detected lcore 0 as core 0 on socket 0 00:05:38.155 EAL: Detected lcore 1 as core 0 on socket 0 00:05:38.155 EAL: Detected lcore 2 as core 0 on socket 0 00:05:38.155 EAL: Detected lcore 3 as core 0 on socket 0 00:05:38.155 EAL: Detected lcore 4 as core 0 on socket 0 00:05:38.155 EAL: Detected lcore 5 as core 0 on socket 0 00:05:38.155 EAL: Detected lcore 6 as core 0 on socket 0 00:05:38.155 EAL: Detected lcore 7 as core 0 on socket 0 00:05:38.155 EAL: Detected lcore 8 as core 0 on socket 0 00:05:38.155 EAL: Detected lcore 9 as core 0 on socket 0 00:05:38.155 EAL: Maximum logical cores by configuration: 128 00:05:38.155 EAL: Detected CPU lcores: 10 00:05:38.155 EAL: Detected NUMA nodes: 1 00:05:38.155 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:38.155 EAL: Detected shared linkage of DPDK 00:05:38.155 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:38.155 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:38.155 EAL: Registered [vdev] bus. 
00:05:38.155 EAL: bus.vdev log level changed from disabled to notice 00:05:38.155 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:38.155 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:38.155 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:38.155 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:38.155 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:38.155 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:38.156 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:38.156 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:38.156 EAL: No shared files mode enabled, IPC will be disabled 00:05:38.156 EAL: No shared files mode enabled, IPC is disabled 00:05:38.156 EAL: Selected IOVA mode 'PA' 00:05:38.156 EAL: Probing VFIO support... 00:05:38.156 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:38.156 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:38.156 EAL: Ask a virtual area of 0x2e000 bytes 00:05:38.156 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:38.156 EAL: Setting up physically contiguous memory... 
00:05:38.156 EAL: Setting maximum number of open files to 524288 00:05:38.156 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:38.156 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:38.156 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.156 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:38.156 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.156 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.156 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:38.156 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:38.156 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.156 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:38.156 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.156 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.156 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:38.156 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:38.156 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.156 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:38.156 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.156 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.156 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:38.156 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:38.156 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.156 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:38.156 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.156 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.156 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:38.156 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:38.156 EAL: Hugepages will be freed exactly as allocated. 
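The EAL setup above reserves four memseg lists, each with a 0x61000-byte header area plus a 0x400000000-byte virtual window for 2 MiB hugepages (`n_segs:8192`). The arithmetic behind that window size can be checked directly:

```shell
# The log reports n_segs:8192 with hugepage_sz:2097152 per memseg list.
# 8192 segments * 2 MiB/page = 16 GiB, i.e. the 0x400000000-byte VA
# reservation seen in each "VA reserved for memseg list" line.
n_segs=8192
hugepage_sz=$(( 2 * 1024 * 1024 ))
list_va=$(( n_segs * hugepage_sz ))
printf 'per-list VA window: 0x%x bytes\n' "$list_va"
```

Four such lists give the target 64 GiB of addressable hugepage VA, even though only a fraction is ever backed by real pages during the test.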
00:05:38.156 EAL: No shared files mode enabled, IPC is disabled 00:05:38.156 EAL: No shared files mode enabled, IPC is disabled 00:05:38.156 EAL: TSC frequency is ~2290000 KHz 00:05:38.156 EAL: Main lcore 0 is ready (tid=7f0ebd276a40;cpuset=[0]) 00:05:38.156 EAL: Trying to obtain current memory policy. 00:05:38.156 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.156 EAL: Restoring previous memory policy: 0 00:05:38.156 EAL: request: mp_malloc_sync 00:05:38.156 EAL: No shared files mode enabled, IPC is disabled 00:05:38.156 EAL: Heap on socket 0 was expanded by 2MB 00:05:38.156 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:38.156 EAL: No shared files mode enabled, IPC is disabled 00:05:38.156 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:38.156 EAL: Mem event callback 'spdk:(nil)' registered 00:05:38.156 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:38.156 00:05:38.156 00:05:38.156 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.156 http://cunit.sourceforge.net/ 00:05:38.156 00:05:38.156 00:05:38.156 Suite: components_suite 00:05:38.723 Test: vtophys_malloc_test ...passed 00:05:38.723 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:38.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.723 EAL: Restoring previous memory policy: 4 00:05:38.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.723 EAL: request: mp_malloc_sync 00:05:38.723 EAL: No shared files mode enabled, IPC is disabled 00:05:38.723 EAL: Heap on socket 0 was expanded by 4MB 00:05:38.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.723 EAL: request: mp_malloc_sync 00:05:38.723 EAL: No shared files mode enabled, IPC is disabled 00:05:38.723 EAL: Heap on socket 0 was shrunk by 4MB 00:05:38.723 EAL: Trying to obtain current memory policy. 
00:05:38.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.723 EAL: Restoring previous memory policy: 4 00:05:38.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.723 EAL: request: mp_malloc_sync 00:05:38.723 EAL: No shared files mode enabled, IPC is disabled 00:05:38.723 EAL: Heap on socket 0 was expanded by 6MB 00:05:38.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.723 EAL: request: mp_malloc_sync 00:05:38.723 EAL: No shared files mode enabled, IPC is disabled 00:05:38.723 EAL: Heap on socket 0 was shrunk by 6MB 00:05:38.723 EAL: Trying to obtain current memory policy. 00:05:38.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.723 EAL: Restoring previous memory policy: 4 00:05:38.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.723 EAL: request: mp_malloc_sync 00:05:38.723 EAL: No shared files mode enabled, IPC is disabled 00:05:38.723 EAL: Heap on socket 0 was expanded by 10MB 00:05:38.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.723 EAL: request: mp_malloc_sync 00:05:38.723 EAL: No shared files mode enabled, IPC is disabled 00:05:38.723 EAL: Heap on socket 0 was shrunk by 10MB 00:05:38.723 EAL: Trying to obtain current memory policy. 00:05:38.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.723 EAL: Restoring previous memory policy: 4 00:05:38.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.723 EAL: request: mp_malloc_sync 00:05:38.723 EAL: No shared files mode enabled, IPC is disabled 00:05:38.723 EAL: Heap on socket 0 was expanded by 18MB 00:05:38.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.723 EAL: request: mp_malloc_sync 00:05:38.723 EAL: No shared files mode enabled, IPC is disabled 00:05:38.723 EAL: Heap on socket 0 was shrunk by 18MB 00:05:38.723 EAL: Trying to obtain current memory policy. 
00:05:38.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.723 EAL: Restoring previous memory policy: 4 00:05:38.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.723 EAL: request: mp_malloc_sync 00:05:38.723 EAL: No shared files mode enabled, IPC is disabled 00:05:38.723 EAL: Heap on socket 0 was expanded by 34MB 00:05:38.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.723 EAL: request: mp_malloc_sync 00:05:38.724 EAL: No shared files mode enabled, IPC is disabled 00:05:38.724 EAL: Heap on socket 0 was shrunk by 34MB 00:05:38.724 EAL: Trying to obtain current memory policy. 00:05:38.724 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.724 EAL: Restoring previous memory policy: 4 00:05:38.724 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.724 EAL: request: mp_malloc_sync 00:05:38.724 EAL: No shared files mode enabled, IPC is disabled 00:05:38.724 EAL: Heap on socket 0 was expanded by 66MB 00:05:38.724 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.724 EAL: request: mp_malloc_sync 00:05:38.724 EAL: No shared files mode enabled, IPC is disabled 00:05:38.724 EAL: Heap on socket 0 was shrunk by 66MB 00:05:38.724 EAL: Trying to obtain current memory policy. 00:05:38.724 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.724 EAL: Restoring previous memory policy: 4 00:05:38.724 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.724 EAL: request: mp_malloc_sync 00:05:38.724 EAL: No shared files mode enabled, IPC is disabled 00:05:38.724 EAL: Heap on socket 0 was expanded by 130MB 00:05:38.724 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.724 EAL: request: mp_malloc_sync 00:05:38.724 EAL: No shared files mode enabled, IPC is disabled 00:05:38.724 EAL: Heap on socket 0 was shrunk by 130MB 00:05:38.724 EAL: Trying to obtain current memory policy. 
00:05:38.724 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.724 EAL: Restoring previous memory policy: 4 00:05:38.724 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.724 EAL: request: mp_malloc_sync 00:05:38.724 EAL: No shared files mode enabled, IPC is disabled 00:05:38.724 EAL: Heap on socket 0 was expanded by 258MB 00:05:38.724 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.724 EAL: request: mp_malloc_sync 00:05:38.724 EAL: No shared files mode enabled, IPC is disabled 00:05:38.724 EAL: Heap on socket 0 was shrunk by 258MB 00:05:38.724 EAL: Trying to obtain current memory policy. 00:05:38.724 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.983 EAL: Restoring previous memory policy: 4 00:05:38.983 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.983 EAL: request: mp_malloc_sync 00:05:38.983 EAL: No shared files mode enabled, IPC is disabled 00:05:38.983 EAL: Heap on socket 0 was expanded by 514MB 00:05:38.983 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.983 EAL: request: mp_malloc_sync 00:05:38.983 EAL: No shared files mode enabled, IPC is disabled 00:05:38.983 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.983 EAL: Trying to obtain current memory policy. 
00:05:38.983 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.241 EAL: Restoring previous memory policy: 4 00:05:39.241 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.241 EAL: request: mp_malloc_sync 00:05:39.241 EAL: No shared files mode enabled, IPC is disabled 00:05:39.241 EAL: Heap on socket 0 was expanded by 1026MB 00:05:39.500 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.759 passed 00:05:39.759 00:05:39.759 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.759 suites 1 1 n/a 0 0 00:05:39.759 tests 2 2 2 0 0 00:05:39.759 asserts 5323 5323 5323 0 n/a 00:05:39.759 00:05:39.759 Elapsed time = 1.348 seconds 00:05:39.759 EAL: request: mp_malloc_sync 00:05:39.759 EAL: No shared files mode enabled, IPC is disabled 00:05:39.759 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:39.759 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.759 EAL: request: mp_malloc_sync 00:05:39.759 EAL: No shared files mode enabled, IPC is disabled 00:05:39.759 EAL: Heap on socket 0 was shrunk by 2MB 00:05:39.759 EAL: No shared files mode enabled, IPC is disabled 00:05:39.759 EAL: No shared files mode enabled, IPC is disabled 00:05:39.759 EAL: No shared files mode enabled, IPC is disabled 00:05:39.759 00:05:39.759 real 0m1.603s 00:05:39.759 user 0m0.730s 00:05:39.759 sys 0m0.739s 00:05:39.759 ************************************ 00:05:39.759 END TEST env_vtophys 00:05:39.759 ************************************ 00:05:39.759 16:31:38 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.759 16:31:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:39.759 16:31:38 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.759 16:31:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.759 16:31:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.759 16:31:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.759 
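The `vtophys_spdk_malloc_test` pass above grows and shrinks the heap in steps of 4, 6, 10, 18, ... 1026 MB, i.e. the observed sizes follow 2^k + 2 MB. A sketch reproducing that size ladder:

```shell
# Reproduce the expand/shrink size sequence observed in the EAL log:
# each step allocates 2^k + 2 MB (4MB, 6MB, 10MB, ..., 1026MB),
# exercising progressively larger heap growth callbacks.
for (( sz = 2; sz <= 1024; sz *= 2 )); do
    echo "expand/shrink by $(( sz + 2 ))MB"
done
```

This is only a reconstruction of the sizes visible in the log; the actual allocation loop is in the C test binary (`test/env/vtophys/vtophys`), not the shell harness.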
************************************ 00:05:39.759 START TEST env_pci 00:05:39.759 ************************************ 00:05:39.759 16:31:38 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.759 00:05:39.759 00:05:39.759 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.759 http://cunit.sourceforge.net/ 00:05:39.759 00:05:39.759 00:05:39.759 Suite: pci 00:05:39.759 Test: pci_hook ...[2024-12-07 16:31:38.535801] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69197 has claimed it 00:05:39.759 passed 00:05:39.759 00:05:39.759 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.759 suites 1 1 n/a 0 0 00:05:39.759 tests 1 1 1 0 0 00:05:39.759 asserts 25 25 25 0 n/a 00:05:39.759 00:05:39.759 Elapsed time = 0.006 seconds 00:05:39.759 EAL: Cannot find device (10000:00:01.0) 00:05:39.759 EAL: Failed to attach device on primary process 00:05:39.759 00:05:39.759 real 0m0.093s 00:05:39.759 user 0m0.039s 00:05:39.759 sys 0m0.053s 00:05:39.759 ************************************ 00:05:39.759 END TEST env_pci 00:05:39.759 ************************************ 00:05:39.759 16:31:38 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.759 16:31:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:39.759 16:31:38 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:39.759 16:31:38 env -- env/env.sh@15 -- # uname 00:05:40.039 16:31:38 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:40.039 16:31:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:40.039 16:31:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.039 16:31:38 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:40.039 16:31:38 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.039 16:31:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.039 ************************************ 00:05:40.039 START TEST env_dpdk_post_init 00:05:40.039 ************************************ 00:05:40.039 16:31:38 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.039 EAL: Detected CPU lcores: 10 00:05:40.039 EAL: Detected NUMA nodes: 1 00:05:40.039 EAL: Detected shared linkage of DPDK 00:05:40.039 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.039 EAL: Selected IOVA mode 'PA' 00:05:40.039 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.039 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:40.039 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:40.039 Starting DPDK initialization... 00:05:40.039 Starting SPDK post initialization... 00:05:40.040 SPDK NVMe probe 00:05:40.040 Attaching to 0000:00:10.0 00:05:40.040 Attaching to 0000:00:11.0 00:05:40.040 Attached to 0000:00:10.0 00:05:40.040 Attached to 0000:00:11.0 00:05:40.040 Cleaning up... 
00:05:40.040 00:05:40.040 real 0m0.256s 00:05:40.040 user 0m0.075s 00:05:40.040 sys 0m0.081s 00:05:40.040 16:31:38 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.040 ************************************ 00:05:40.040 END TEST env_dpdk_post_init 00:05:40.040 ************************************ 00:05:40.040 16:31:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.299 16:31:38 env -- env/env.sh@26 -- # uname 00:05:40.299 16:31:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:40.299 16:31:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.299 16:31:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.299 16:31:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.299 16:31:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.299 ************************************ 00:05:40.299 START TEST env_mem_callbacks 00:05:40.299 ************************************ 00:05:40.299 16:31:39 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.299 EAL: Detected CPU lcores: 10 00:05:40.299 EAL: Detected NUMA nodes: 1 00:05:40.299 EAL: Detected shared linkage of DPDK 00:05:40.299 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.299 EAL: Selected IOVA mode 'PA' 00:05:40.299 00:05:40.299 00:05:40.299 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.299 http://cunit.sourceforge.net/ 00:05:40.299 00:05:40.299 00:05:40.299 Suite: memory 00:05:40.299 Test: test ... 
00:05:40.299 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.299 register 0x200000200000 2097152 00:05:40.299 malloc 3145728 00:05:40.299 register 0x200000400000 4194304 00:05:40.299 buf 0x200000500000 len 3145728 PASSED 00:05:40.299 malloc 64 00:05:40.299 buf 0x2000004fff40 len 64 PASSED 00:05:40.299 malloc 4194304 00:05:40.299 register 0x200000800000 6291456 00:05:40.299 buf 0x200000a00000 len 4194304 PASSED 00:05:40.299 free 0x200000500000 3145728 00:05:40.299 free 0x2000004fff40 64 00:05:40.299 unregister 0x200000400000 4194304 PASSED 00:05:40.299 free 0x200000a00000 4194304 00:05:40.299 unregister 0x200000800000 6291456 PASSED 00:05:40.299 malloc 8388608 00:05:40.299 register 0x200000400000 10485760 00:05:40.299 buf 0x200000600000 len 8388608 PASSED 00:05:40.299 free 0x200000600000 8388608 00:05:40.299 unregister 0x200000400000 10485760 PASSED 00:05:40.299 passed 00:05:40.299 00:05:40.299 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.299 suites 1 1 n/a 0 0 00:05:40.299 tests 1 1 1 0 0 00:05:40.299 asserts 15 15 15 0 n/a 00:05:40.299 00:05:40.299 Elapsed time = 0.011 seconds 00:05:40.560 00:05:40.560 real 0m0.203s 00:05:40.560 user 0m0.040s 00:05:40.560 sys 0m0.060s 00:05:40.560 16:31:39 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.560 16:31:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:40.560 ************************************ 00:05:40.560 END TEST env_mem_callbacks 00:05:40.560 ************************************ 00:05:40.560 00:05:40.560 real 0m3.016s 00:05:40.560 user 0m1.373s 00:05:40.560 sys 0m1.310s 00:05:40.560 ************************************ 00:05:40.560 END TEST env 00:05:40.560 ************************************ 00:05:40.560 16:31:39 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.560 16:31:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.560 16:31:39 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:40.560 16:31:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.560 16:31:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.560 16:31:39 -- common/autotest_common.sh@10 -- # set +x 00:05:40.560 ************************************ 00:05:40.560 START TEST rpc 00:05:40.560 ************************************ 00:05:40.560 16:31:39 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:40.560 * Looking for test storage... 00:05:40.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.560 16:31:39 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.560 16:31:39 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.560 16:31:39 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.821 16:31:39 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.821 16:31:39 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.821 16:31:39 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.821 16:31:39 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.821 16:31:39 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.821 16:31:39 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.821 16:31:39 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.821 16:31:39 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.821 16:31:39 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.821 16:31:39 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.821 16:31:39 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.821 16:31:39 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.821 16:31:39 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:40.821 16:31:39 rpc -- scripts/common.sh@345 -- # : 1 00:05:40.821 16:31:39 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.821 16:31:39 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.821 16:31:39 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:40.821 16:31:39 rpc -- scripts/common.sh@353 -- # local d=1 00:05:40.821 16:31:39 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.821 16:31:39 rpc -- scripts/common.sh@355 -- # echo 1 00:05:40.821 16:31:39 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.821 16:31:39 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:40.821 16:31:39 rpc -- scripts/common.sh@353 -- # local d=2 00:05:40.821 16:31:39 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.821 16:31:39 rpc -- scripts/common.sh@355 -- # echo 2 00:05:40.821 16:31:39 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.821 16:31:39 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.821 16:31:39 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.821 16:31:39 rpc -- scripts/common.sh@368 -- # return 0 00:05:40.821 16:31:39 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.821 16:31:39 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.821 --rc genhtml_branch_coverage=1 00:05:40.821 --rc genhtml_function_coverage=1 00:05:40.821 --rc genhtml_legend=1 00:05:40.821 --rc geninfo_all_blocks=1 00:05:40.821 --rc geninfo_unexecuted_blocks=1 00:05:40.821 00:05:40.821 ' 00:05:40.821 16:31:39 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.821 --rc genhtml_branch_coverage=1 00:05:40.821 --rc genhtml_function_coverage=1 00:05:40.821 --rc genhtml_legend=1 00:05:40.821 --rc geninfo_all_blocks=1 00:05:40.821 --rc geninfo_unexecuted_blocks=1 00:05:40.821 00:05:40.821 ' 00:05:40.821 16:31:39 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:40.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:40.821 --rc genhtml_branch_coverage=1 00:05:40.821 --rc genhtml_function_coverage=1 00:05:40.821 --rc genhtml_legend=1 00:05:40.821 --rc geninfo_all_blocks=1 00:05:40.821 --rc geninfo_unexecuted_blocks=1 00:05:40.821 00:05:40.821 ' 00:05:40.821 16:31:39 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.821 --rc genhtml_branch_coverage=1 00:05:40.821 --rc genhtml_function_coverage=1 00:05:40.821 --rc genhtml_legend=1 00:05:40.821 --rc geninfo_all_blocks=1 00:05:40.821 --rc geninfo_unexecuted_blocks=1 00:05:40.821 00:05:40.821 ' 00:05:40.821 16:31:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69325 00:05:40.821 16:31:39 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:40.821 16:31:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.821 16:31:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69325 00:05:40.821 16:31:39 rpc -- common/autotest_common.sh@831 -- # '[' -z 69325 ']' 00:05:40.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.821 16:31:39 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.821 16:31:39 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.821 16:31:39 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.821 16:31:39 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.821 16:31:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.821 [2024-12-07 16:31:39.658587] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:40.821 [2024-12-07 16:31:39.658724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69325 ] 00:05:41.081 [2024-12-07 16:31:39.818459] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.081 [2024-12-07 16:31:39.866209] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:41.081 [2024-12-07 16:31:39.866272] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69325' to capture a snapshot of events at runtime. 00:05:41.081 [2024-12-07 16:31:39.866285] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.081 [2024-12-07 16:31:39.866294] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.081 [2024-12-07 16:31:39.866306] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69325 for offline analysis/debug. 
00:05:41.081 [2024-12-07 16:31:39.866359] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.651 16:31:40 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.651 16:31:40 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:41.651 16:31:40 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.651 16:31:40 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.651 16:31:40 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.651 16:31:40 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.651 16:31:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.651 16:31:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.651 16:31:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.651 ************************************ 00:05:41.651 START TEST rpc_integrity 00:05:41.651 ************************************ 00:05:41.651 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:41.651 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.651 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.651 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.651 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.651 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.651 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.913 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.913 16:31:40 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.913 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.913 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.913 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.913 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:41.913 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.913 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.913 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.913 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.913 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.913 { 00:05:41.913 "name": "Malloc0", 00:05:41.913 "aliases": [ 00:05:41.913 "0c47849d-79c7-4165-8407-7754b73f865f" 00:05:41.913 ], 00:05:41.913 "product_name": "Malloc disk", 00:05:41.913 "block_size": 512, 00:05:41.913 "num_blocks": 16384, 00:05:41.913 "uuid": "0c47849d-79c7-4165-8407-7754b73f865f", 00:05:41.913 "assigned_rate_limits": { 00:05:41.913 "rw_ios_per_sec": 0, 00:05:41.913 "rw_mbytes_per_sec": 0, 00:05:41.913 "r_mbytes_per_sec": 0, 00:05:41.913 "w_mbytes_per_sec": 0 00:05:41.913 }, 00:05:41.913 "claimed": false, 00:05:41.913 "zoned": false, 00:05:41.913 "supported_io_types": { 00:05:41.913 "read": true, 00:05:41.913 "write": true, 00:05:41.913 "unmap": true, 00:05:41.913 "flush": true, 00:05:41.913 "reset": true, 00:05:41.913 "nvme_admin": false, 00:05:41.913 "nvme_io": false, 00:05:41.913 "nvme_io_md": false, 00:05:41.913 "write_zeroes": true, 00:05:41.913 "zcopy": true, 00:05:41.913 "get_zone_info": false, 00:05:41.913 "zone_management": false, 00:05:41.913 "zone_append": false, 00:05:41.913 "compare": false, 00:05:41.913 "compare_and_write": false, 00:05:41.913 "abort": true, 00:05:41.913 "seek_hole": false, 
00:05:41.913 "seek_data": false, 00:05:41.913 "copy": true, 00:05:41.913 "nvme_iov_md": false 00:05:41.913 }, 00:05:41.913 "memory_domains": [ 00:05:41.913 { 00:05:41.913 "dma_device_id": "system", 00:05:41.913 "dma_device_type": 1 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.913 "dma_device_type": 2 00:05:41.913 } 00:05:41.913 ], 00:05:41.913 "driver_specific": {} 00:05:41.913 } 00:05:41.913 ]' 00:05:41.913 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.913 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.913 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:41.913 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.913 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.913 [2024-12-07 16:31:40.673195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:41.913 [2024-12-07 16:31:40.673395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.913 [2024-12-07 16:31:40.673462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:41.913 [2024-12-07 16:31:40.673474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.913 [2024-12-07 16:31:40.676131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.913 [2024-12-07 16:31:40.676176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.913 Passthru0 00:05:41.913 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.913 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.913 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.913 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:41.913 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.913 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.913 { 00:05:41.913 "name": "Malloc0", 00:05:41.913 "aliases": [ 00:05:41.913 "0c47849d-79c7-4165-8407-7754b73f865f" 00:05:41.913 ], 00:05:41.913 "product_name": "Malloc disk", 00:05:41.913 "block_size": 512, 00:05:41.913 "num_blocks": 16384, 00:05:41.913 "uuid": "0c47849d-79c7-4165-8407-7754b73f865f", 00:05:41.913 "assigned_rate_limits": { 00:05:41.913 "rw_ios_per_sec": 0, 00:05:41.913 "rw_mbytes_per_sec": 0, 00:05:41.913 "r_mbytes_per_sec": 0, 00:05:41.913 "w_mbytes_per_sec": 0 00:05:41.913 }, 00:05:41.913 "claimed": true, 00:05:41.913 "claim_type": "exclusive_write", 00:05:41.913 "zoned": false, 00:05:41.913 "supported_io_types": { 00:05:41.913 "read": true, 00:05:41.913 "write": true, 00:05:41.913 "unmap": true, 00:05:41.913 "flush": true, 00:05:41.913 "reset": true, 00:05:41.913 "nvme_admin": false, 00:05:41.913 "nvme_io": false, 00:05:41.913 "nvme_io_md": false, 00:05:41.913 "write_zeroes": true, 00:05:41.913 "zcopy": true, 00:05:41.913 "get_zone_info": false, 00:05:41.913 "zone_management": false, 00:05:41.913 "zone_append": false, 00:05:41.913 "compare": false, 00:05:41.913 "compare_and_write": false, 00:05:41.913 "abort": true, 00:05:41.913 "seek_hole": false, 00:05:41.913 "seek_data": false, 00:05:41.913 "copy": true, 00:05:41.913 "nvme_iov_md": false 00:05:41.913 }, 00:05:41.913 "memory_domains": [ 00:05:41.913 { 00:05:41.913 "dma_device_id": "system", 00:05:41.913 "dma_device_type": 1 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.913 "dma_device_type": 2 00:05:41.913 } 00:05:41.913 ], 00:05:41.913 "driver_specific": {} 00:05:41.913 }, 00:05:41.913 { 00:05:41.913 "name": "Passthru0", 00:05:41.913 "aliases": [ 00:05:41.913 "a48af4bd-ac40-5eab-87f6-e7401c8fd4e9" 00:05:41.913 ], 00:05:41.913 "product_name": "passthru", 00:05:41.913 
"block_size": 512, 00:05:41.913 "num_blocks": 16384, 00:05:41.913 "uuid": "a48af4bd-ac40-5eab-87f6-e7401c8fd4e9", 00:05:41.913 "assigned_rate_limits": { 00:05:41.913 "rw_ios_per_sec": 0, 00:05:41.913 "rw_mbytes_per_sec": 0, 00:05:41.913 "r_mbytes_per_sec": 0, 00:05:41.913 "w_mbytes_per_sec": 0 00:05:41.913 }, 00:05:41.913 "claimed": false, 00:05:41.913 "zoned": false, 00:05:41.913 "supported_io_types": { 00:05:41.913 "read": true, 00:05:41.913 "write": true, 00:05:41.913 "unmap": true, 00:05:41.913 "flush": true, 00:05:41.913 "reset": true, 00:05:41.913 "nvme_admin": false, 00:05:41.913 "nvme_io": false, 00:05:41.913 "nvme_io_md": false, 00:05:41.914 "write_zeroes": true, 00:05:41.914 "zcopy": true, 00:05:41.914 "get_zone_info": false, 00:05:41.914 "zone_management": false, 00:05:41.914 "zone_append": false, 00:05:41.914 "compare": false, 00:05:41.914 "compare_and_write": false, 00:05:41.914 "abort": true, 00:05:41.914 "seek_hole": false, 00:05:41.914 "seek_data": false, 00:05:41.914 "copy": true, 00:05:41.914 "nvme_iov_md": false 00:05:41.914 }, 00:05:41.914 "memory_domains": [ 00:05:41.914 { 00:05:41.914 "dma_device_id": "system", 00:05:41.914 "dma_device_type": 1 00:05:41.914 }, 00:05:41.914 { 00:05:41.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.914 "dma_device_type": 2 00:05:41.914 } 00:05:41.914 ], 00:05:41.914 "driver_specific": { 00:05:41.914 "passthru": { 00:05:41.914 "name": "Passthru0", 00:05:41.914 "base_bdev_name": "Malloc0" 00:05:41.914 } 00:05:41.914 } 00:05:41.914 } 00:05:41.914 ]' 00:05:41.914 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.914 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.914 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.914 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.914 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.914 16:31:40 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.914 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:41.914 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.914 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.914 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.914 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.914 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.914 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.914 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.914 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.914 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.177 ************************************ 00:05:42.177 END TEST rpc_integrity 00:05:42.177 ************************************ 00:05:42.177 16:31:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.177 00:05:42.177 real 0m0.338s 00:05:42.177 user 0m0.200s 00:05:42.177 sys 0m0.058s 00:05:42.177 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.177 16:31:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.177 16:31:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:42.177 16:31:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.177 16:31:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.177 16:31:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.177 ************************************ 00:05:42.177 START TEST rpc_plugins 00:05:42.177 ************************************ 00:05:42.177 16:31:40 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:42.177 16:31:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:42.177 16:31:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.177 16:31:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.177 16:31:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.177 16:31:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:42.177 16:31:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:42.177 16:31:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.177 16:31:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.177 16:31:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.177 16:31:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:42.177 { 00:05:42.177 "name": "Malloc1", 00:05:42.177 "aliases": [ 00:05:42.177 "a621bd02-6863-427b-8fb9-803414ea5d35" 00:05:42.177 ], 00:05:42.177 "product_name": "Malloc disk", 00:05:42.177 "block_size": 4096, 00:05:42.177 "num_blocks": 256, 00:05:42.177 "uuid": "a621bd02-6863-427b-8fb9-803414ea5d35", 00:05:42.177 "assigned_rate_limits": { 00:05:42.177 "rw_ios_per_sec": 0, 00:05:42.177 "rw_mbytes_per_sec": 0, 00:05:42.177 "r_mbytes_per_sec": 0, 00:05:42.177 "w_mbytes_per_sec": 0 00:05:42.177 }, 00:05:42.177 "claimed": false, 00:05:42.177 "zoned": false, 00:05:42.177 "supported_io_types": { 00:05:42.177 "read": true, 00:05:42.177 "write": true, 00:05:42.177 "unmap": true, 00:05:42.177 "flush": true, 00:05:42.177 "reset": true, 00:05:42.177 "nvme_admin": false, 00:05:42.177 "nvme_io": false, 00:05:42.177 "nvme_io_md": false, 00:05:42.177 "write_zeroes": true, 00:05:42.177 "zcopy": true, 00:05:42.177 "get_zone_info": false, 00:05:42.177 "zone_management": false, 00:05:42.177 "zone_append": false, 00:05:42.177 "compare": false, 00:05:42.177 "compare_and_write": false, 00:05:42.177 "abort": true, 00:05:42.177 "seek_hole": false, 00:05:42.177 "seek_data": false, 00:05:42.177 "copy": 
true, 00:05:42.177 "nvme_iov_md": false 00:05:42.177 }, 00:05:42.177 "memory_domains": [ 00:05:42.177 { 00:05:42.177 "dma_device_id": "system", 00:05:42.177 "dma_device_type": 1 00:05:42.177 }, 00:05:42.177 { 00:05:42.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.177 "dma_device_type": 2 00:05:42.177 } 00:05:42.177 ], 00:05:42.177 "driver_specific": {} 00:05:42.177 } 00:05:42.177 ]' 00:05:42.177 16:31:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:42.177 16:31:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:42.177 16:31:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:42.177 16:31:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.177 16:31:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.177 16:31:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.177 16:31:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:42.177 16:31:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.177 16:31:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.177 16:31:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.178 16:31:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:42.178 16:31:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:42.437 16:31:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:42.437 00:05:42.437 real 0m0.166s 00:05:42.437 user 0m0.101s 00:05:42.437 sys 0m0.028s 00:05:42.437 16:31:41 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.437 16:31:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.437 ************************************ 00:05:42.437 END TEST rpc_plugins 00:05:42.437 ************************************ 00:05:42.437 16:31:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:42.437 16:31:41 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.437 16:31:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.437 16:31:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.437 ************************************ 00:05:42.437 START TEST rpc_trace_cmd_test 00:05:42.437 ************************************ 00:05:42.437 16:31:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:42.437 16:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:42.437 16:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:42.437 16:31:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.437 16:31:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.437 16:31:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.437 16:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:42.437 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69325", 00:05:42.437 "tpoint_group_mask": "0x8", 00:05:42.437 "iscsi_conn": { 00:05:42.437 "mask": "0x2", 00:05:42.437 "tpoint_mask": "0x0" 00:05:42.437 }, 00:05:42.437 "scsi": { 00:05:42.437 "mask": "0x4", 00:05:42.437 "tpoint_mask": "0x0" 00:05:42.437 }, 00:05:42.437 "bdev": { 00:05:42.437 "mask": "0x8", 00:05:42.437 "tpoint_mask": "0xffffffffffffffff" 00:05:42.437 }, 00:05:42.437 "nvmf_rdma": { 00:05:42.437 "mask": "0x10", 00:05:42.437 "tpoint_mask": "0x0" 00:05:42.437 }, 00:05:42.437 "nvmf_tcp": { 00:05:42.437 "mask": "0x20", 00:05:42.437 "tpoint_mask": "0x0" 00:05:42.437 }, 00:05:42.437 "ftl": { 00:05:42.437 "mask": "0x40", 00:05:42.437 "tpoint_mask": "0x0" 00:05:42.437 }, 00:05:42.437 "blobfs": { 00:05:42.437 "mask": "0x80", 00:05:42.437 "tpoint_mask": "0x0" 00:05:42.437 }, 00:05:42.437 "dsa": { 00:05:42.437 "mask": "0x200", 00:05:42.437 "tpoint_mask": "0x0" 00:05:42.437 }, 00:05:42.437 "thread": { 00:05:42.437 "mask": "0x400", 00:05:42.437 
"tpoint_mask": "0x0" 00:05:42.437 }, 00:05:42.437 "nvme_pcie": { 00:05:42.437 "mask": "0x800", 00:05:42.437 "tpoint_mask": "0x0" 00:05:42.437 }, 00:05:42.437 "iaa": { 00:05:42.437 "mask": "0x1000", 00:05:42.437 "tpoint_mask": "0x0" 00:05:42.437 }, 00:05:42.437 "nvme_tcp": { 00:05:42.437 "mask": "0x2000", 00:05:42.437 "tpoint_mask": "0x0" 00:05:42.437 }, 00:05:42.437 "bdev_nvme": { 00:05:42.437 "mask": "0x4000", 00:05:42.437 "tpoint_mask": "0x0" 00:05:42.437 }, 00:05:42.437 "sock": { 00:05:42.437 "mask": "0x8000", 00:05:42.437 "tpoint_mask": "0x0" 00:05:42.437 }, 00:05:42.437 "blob": { 00:05:42.437 "mask": "0x10000", 00:05:42.437 "tpoint_mask": "0x0" 00:05:42.437 }, 00:05:42.437 "bdev_raid": { 00:05:42.437 "mask": "0x20000", 00:05:42.437 "tpoint_mask": "0x0" 00:05:42.437 } 00:05:42.437 }' 00:05:42.437 16:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:42.437 16:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:42.437 16:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.437 16:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.437 16:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.437 16:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.437 16:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.696 16:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.696 16:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.696 ************************************ 00:05:42.696 END TEST rpc_trace_cmd_test 00:05:42.696 ************************************ 00:05:42.696 16:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:42.696 00:05:42.696 real 0m0.267s 00:05:42.696 user 0m0.213s 00:05:42.696 sys 0m0.040s 00:05:42.696 16:31:41 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.696 16:31:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.696 16:31:41 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.696 16:31:41 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.696 16:31:41 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.696 16:31:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.696 16:31:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.696 16:31:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.696 ************************************ 00:05:42.696 START TEST rpc_daemon_integrity 00:05:42.696 ************************************ 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.696 { 00:05:42.696 "name": "Malloc2", 00:05:42.696 "aliases": [ 00:05:42.696 "c10d17d5-218c-4d3b-bc83-ce040da01630" 00:05:42.696 ], 00:05:42.696 "product_name": "Malloc disk", 00:05:42.696 "block_size": 512, 00:05:42.696 "num_blocks": 16384, 00:05:42.696 "uuid": "c10d17d5-218c-4d3b-bc83-ce040da01630", 00:05:42.696 "assigned_rate_limits": { 00:05:42.696 "rw_ios_per_sec": 0, 00:05:42.696 "rw_mbytes_per_sec": 0, 00:05:42.696 "r_mbytes_per_sec": 0, 00:05:42.696 "w_mbytes_per_sec": 0 00:05:42.696 }, 00:05:42.696 "claimed": false, 00:05:42.696 "zoned": false, 00:05:42.696 "supported_io_types": { 00:05:42.696 "read": true, 00:05:42.696 "write": true, 00:05:42.696 "unmap": true, 00:05:42.696 "flush": true, 00:05:42.696 "reset": true, 00:05:42.696 "nvme_admin": false, 00:05:42.696 "nvme_io": false, 00:05:42.696 "nvme_io_md": false, 00:05:42.696 "write_zeroes": true, 00:05:42.696 "zcopy": true, 00:05:42.696 "get_zone_info": false, 00:05:42.696 "zone_management": false, 00:05:42.696 "zone_append": false, 00:05:42.696 "compare": false, 00:05:42.696 "compare_and_write": false, 00:05:42.696 "abort": true, 00:05:42.696 "seek_hole": false, 00:05:42.696 "seek_data": false, 00:05:42.696 "copy": true, 00:05:42.696 "nvme_iov_md": false 00:05:42.696 }, 00:05:42.696 "memory_domains": [ 00:05:42.696 { 00:05:42.696 "dma_device_id": "system", 00:05:42.696 "dma_device_type": 1 00:05:42.696 }, 00:05:42.696 { 00:05:42.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.696 "dma_device_type": 2 00:05:42.696 } 00:05:42.696 ], 00:05:42.696 "driver_specific": {} 00:05:42.696 } 00:05:42.696 ]' 
00:05:42.696 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.955 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.955 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.955 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.955 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.955 [2024-12-07 16:31:41.636570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:42.955 [2024-12-07 16:31:41.636656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.955 [2024-12-07 16:31:41.636687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:42.955 [2024-12-07 16:31:41.636699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.955 [2024-12-07 16:31:41.639244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.955 [2024-12-07 16:31:41.639301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.955 Passthru0 00:05:42.955 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.956 { 00:05:42.956 "name": "Malloc2", 00:05:42.956 "aliases": [ 00:05:42.956 "c10d17d5-218c-4d3b-bc83-ce040da01630" 00:05:42.956 ], 00:05:42.956 "product_name": "Malloc disk", 00:05:42.956 "block_size": 
512, 00:05:42.956 "num_blocks": 16384, 00:05:42.956 "uuid": "c10d17d5-218c-4d3b-bc83-ce040da01630", 00:05:42.956 "assigned_rate_limits": { 00:05:42.956 "rw_ios_per_sec": 0, 00:05:42.956 "rw_mbytes_per_sec": 0, 00:05:42.956 "r_mbytes_per_sec": 0, 00:05:42.956 "w_mbytes_per_sec": 0 00:05:42.956 }, 00:05:42.956 "claimed": true, 00:05:42.956 "claim_type": "exclusive_write", 00:05:42.956 "zoned": false, 00:05:42.956 "supported_io_types": { 00:05:42.956 "read": true, 00:05:42.956 "write": true, 00:05:42.956 "unmap": true, 00:05:42.956 "flush": true, 00:05:42.956 "reset": true, 00:05:42.956 "nvme_admin": false, 00:05:42.956 "nvme_io": false, 00:05:42.956 "nvme_io_md": false, 00:05:42.956 "write_zeroes": true, 00:05:42.956 "zcopy": true, 00:05:42.956 "get_zone_info": false, 00:05:42.956 "zone_management": false, 00:05:42.956 "zone_append": false, 00:05:42.956 "compare": false, 00:05:42.956 "compare_and_write": false, 00:05:42.956 "abort": true, 00:05:42.956 "seek_hole": false, 00:05:42.956 "seek_data": false, 00:05:42.956 "copy": true, 00:05:42.956 "nvme_iov_md": false 00:05:42.956 }, 00:05:42.956 "memory_domains": [ 00:05:42.956 { 00:05:42.956 "dma_device_id": "system", 00:05:42.956 "dma_device_type": 1 00:05:42.956 }, 00:05:42.956 { 00:05:42.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.956 "dma_device_type": 2 00:05:42.956 } 00:05:42.956 ], 00:05:42.956 "driver_specific": {} 00:05:42.956 }, 00:05:42.956 { 00:05:42.956 "name": "Passthru0", 00:05:42.956 "aliases": [ 00:05:42.956 "0ad25e68-bb47-5b36-b5d6-7635a1faa10d" 00:05:42.956 ], 00:05:42.956 "product_name": "passthru", 00:05:42.956 "block_size": 512, 00:05:42.956 "num_blocks": 16384, 00:05:42.956 "uuid": "0ad25e68-bb47-5b36-b5d6-7635a1faa10d", 00:05:42.956 "assigned_rate_limits": { 00:05:42.956 "rw_ios_per_sec": 0, 00:05:42.956 "rw_mbytes_per_sec": 0, 00:05:42.956 "r_mbytes_per_sec": 0, 00:05:42.956 "w_mbytes_per_sec": 0 00:05:42.956 }, 00:05:42.956 "claimed": false, 00:05:42.956 "zoned": false, 00:05:42.956 
"supported_io_types": { 00:05:42.956 "read": true, 00:05:42.956 "write": true, 00:05:42.956 "unmap": true, 00:05:42.956 "flush": true, 00:05:42.956 "reset": true, 00:05:42.956 "nvme_admin": false, 00:05:42.956 "nvme_io": false, 00:05:42.956 "nvme_io_md": false, 00:05:42.956 "write_zeroes": true, 00:05:42.956 "zcopy": true, 00:05:42.956 "get_zone_info": false, 00:05:42.956 "zone_management": false, 00:05:42.956 "zone_append": false, 00:05:42.956 "compare": false, 00:05:42.956 "compare_and_write": false, 00:05:42.956 "abort": true, 00:05:42.956 "seek_hole": false, 00:05:42.956 "seek_data": false, 00:05:42.956 "copy": true, 00:05:42.956 "nvme_iov_md": false 00:05:42.956 }, 00:05:42.956 "memory_domains": [ 00:05:42.956 { 00:05:42.956 "dma_device_id": "system", 00:05:42.956 "dma_device_type": 1 00:05:42.956 }, 00:05:42.956 { 00:05:42.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.956 "dma_device_type": 2 00:05:42.956 } 00:05:42.956 ], 00:05:42.956 "driver_specific": { 00:05:42.956 "passthru": { 00:05:42.956 "name": "Passthru0", 00:05:42.956 "base_bdev_name": "Malloc2" 00:05:42.956 } 00:05:42.956 } 00:05:42.956 } 00:05:42.956 ]' 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.956 00:05:42.956 real 0m0.330s 00:05:42.956 user 0m0.189s 00:05:42.956 sys 0m0.069s 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.956 ************************************ 00:05:42.956 END TEST rpc_daemon_integrity 00:05:42.956 ************************************ 00:05:42.956 16:31:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.215 16:31:41 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:43.215 16:31:41 rpc -- rpc/rpc.sh@84 -- # killprocess 69325 00:05:43.215 16:31:41 rpc -- common/autotest_common.sh@950 -- # '[' -z 69325 ']' 00:05:43.215 16:31:41 rpc -- common/autotest_common.sh@954 -- # kill -0 69325 00:05:43.215 16:31:41 rpc -- common/autotest_common.sh@955 -- # uname 00:05:43.215 16:31:41 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.215 16:31:41 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69325 00:05:43.215 killing process with pid 69325 00:05:43.215 16:31:41 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.215 16:31:41 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.215 16:31:41 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 69325' 00:05:43.215 16:31:41 rpc -- common/autotest_common.sh@969 -- # kill 69325 00:05:43.215 16:31:41 rpc -- common/autotest_common.sh@974 -- # wait 69325 00:05:43.475 00:05:43.475 real 0m2.984s 00:05:43.475 user 0m3.579s 00:05:43.475 sys 0m0.917s 00:05:43.475 16:31:42 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.475 16:31:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.475 ************************************ 00:05:43.475 END TEST rpc 00:05:43.475 ************************************ 00:05:43.475 16:31:42 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:43.475 16:31:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.475 16:31:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.475 16:31:42 -- common/autotest_common.sh@10 -- # set +x 00:05:43.735 ************************************ 00:05:43.735 START TEST skip_rpc 00:05:43.735 ************************************ 00:05:43.735 16:31:42 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:43.735 * Looking for test storage... 
00:05:43.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:43.735 16:31:42 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:43.735 16:31:42 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:43.735 16:31:42 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:43.735 16:31:42 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.735 16:31:42 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:43.735 16:31:42 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.735 16:31:42 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:43.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.735 --rc genhtml_branch_coverage=1 00:05:43.735 --rc genhtml_function_coverage=1 00:05:43.735 --rc genhtml_legend=1 00:05:43.735 --rc geninfo_all_blocks=1 00:05:43.735 --rc geninfo_unexecuted_blocks=1 00:05:43.735 00:05:43.735 ' 00:05:43.735 16:31:42 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:43.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.735 --rc genhtml_branch_coverage=1 00:05:43.735 --rc genhtml_function_coverage=1 00:05:43.735 --rc genhtml_legend=1 00:05:43.735 --rc geninfo_all_blocks=1 00:05:43.735 --rc geninfo_unexecuted_blocks=1 00:05:43.735 00:05:43.735 ' 00:05:43.735 16:31:42 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:43.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.735 --rc genhtml_branch_coverage=1 00:05:43.735 --rc genhtml_function_coverage=1 00:05:43.735 --rc genhtml_legend=1 00:05:43.735 --rc geninfo_all_blocks=1 00:05:43.735 --rc geninfo_unexecuted_blocks=1 00:05:43.735 00:05:43.735 ' 00:05:43.735 16:31:42 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:43.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.735 --rc genhtml_branch_coverage=1 00:05:43.735 --rc genhtml_function_coverage=1 00:05:43.735 --rc genhtml_legend=1 00:05:43.735 --rc geninfo_all_blocks=1 00:05:43.735 --rc geninfo_unexecuted_blocks=1 00:05:43.735 00:05:43.735 ' 00:05:43.735 16:31:42 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:43.735 16:31:42 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:43.735 16:31:42 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:43.735 16:31:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.735 16:31:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.735 16:31:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.735 ************************************ 00:05:43.735 START TEST skip_rpc 00:05:43.735 ************************************ 00:05:43.735 16:31:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:43.735 16:31:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69531 00:05:43.735 16:31:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:43.735 16:31:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.735 16:31:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:43.995 [2024-12-07 16:31:42.714689] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:43.995 [2024-12-07 16:31:42.714908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69531 ] 00:05:43.995 [2024-12-07 16:31:42.878569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.255 [2024-12-07 16:31:42.933006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.537 16:31:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:49.537 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:49.537 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:49.537 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:49.537 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.537 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:49.537 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.537 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:49.537 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.537 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69531 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69531 ']' 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69531 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69531 00:05:49.538 killing process with pid 69531 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69531' 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69531 00:05:49.538 16:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69531 00:05:49.538 00:05:49.538 real 0m5.464s 00:05:49.538 user 0m5.065s 00:05:49.538 sys 0m0.326s 00:05:49.538 16:31:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.538 16:31:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.538 ************************************ 00:05:49.538 END TEST skip_rpc 00:05:49.538 ************************************ 00:05:49.538 16:31:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:49.538 16:31:48 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.538 16:31:48 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.538 16:31:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.538 
************************************ 00:05:49.538 START TEST skip_rpc_with_json 00:05:49.538 ************************************ 00:05:49.538 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:49.538 16:31:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:49.538 16:31:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69614 00:05:49.538 16:31:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.538 16:31:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.538 16:31:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69614 00:05:49.538 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69614 ']' 00:05:49.538 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.538 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.538 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.538 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.538 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.538 [2024-12-07 16:31:48.252401] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:49.538 [2024-12-07 16:31:48.252643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69614 ] 00:05:49.538 [2024-12-07 16:31:48.416490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.798 [2024-12-07 16:31:48.470426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.368 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.368 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:50.368 16:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:50.368 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.368 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.368 [2024-12-07 16:31:49.089911] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:50.368 request: 00:05:50.368 { 00:05:50.368 "trtype": "tcp", 00:05:50.368 "method": "nvmf_get_transports", 00:05:50.368 "req_id": 1 00:05:50.368 } 00:05:50.368 Got JSON-RPC error response 00:05:50.368 response: 00:05:50.368 { 00:05:50.368 "code": -19, 00:05:50.368 "message": "No such device" 00:05:50.368 } 00:05:50.368 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:50.368 16:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:50.368 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.368 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.368 [2024-12-07 16:31:49.102036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:50.368 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.368 16:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:50.368 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.368 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.629 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.629 16:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:50.629 { 00:05:50.629 "subsystems": [ 00:05:50.629 { 00:05:50.629 "subsystem": "fsdev", 00:05:50.629 "config": [ 00:05:50.629 { 00:05:50.629 "method": "fsdev_set_opts", 00:05:50.629 "params": { 00:05:50.629 "fsdev_io_pool_size": 65535, 00:05:50.629 "fsdev_io_cache_size": 256 00:05:50.629 } 00:05:50.629 } 00:05:50.629 ] 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "subsystem": "keyring", 00:05:50.629 "config": [] 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "subsystem": "iobuf", 00:05:50.629 "config": [ 00:05:50.629 { 00:05:50.629 "method": "iobuf_set_options", 00:05:50.629 "params": { 00:05:50.629 "small_pool_count": 8192, 00:05:50.629 "large_pool_count": 1024, 00:05:50.629 "small_bufsize": 8192, 00:05:50.629 "large_bufsize": 135168 00:05:50.629 } 00:05:50.629 } 00:05:50.629 ] 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "subsystem": "sock", 00:05:50.629 "config": [ 00:05:50.629 { 00:05:50.629 "method": "sock_set_default_impl", 00:05:50.629 "params": { 00:05:50.629 "impl_name": "posix" 00:05:50.629 } 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "method": "sock_impl_set_options", 00:05:50.629 "params": { 00:05:50.629 "impl_name": "ssl", 00:05:50.629 "recv_buf_size": 4096, 00:05:50.629 "send_buf_size": 4096, 00:05:50.629 "enable_recv_pipe": true, 00:05:50.629 "enable_quickack": false, 00:05:50.629 "enable_placement_id": 0, 00:05:50.629 
"enable_zerocopy_send_server": true, 00:05:50.629 "enable_zerocopy_send_client": false, 00:05:50.629 "zerocopy_threshold": 0, 00:05:50.629 "tls_version": 0, 00:05:50.629 "enable_ktls": false 00:05:50.629 } 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "method": "sock_impl_set_options", 00:05:50.629 "params": { 00:05:50.629 "impl_name": "posix", 00:05:50.629 "recv_buf_size": 2097152, 00:05:50.629 "send_buf_size": 2097152, 00:05:50.629 "enable_recv_pipe": true, 00:05:50.629 "enable_quickack": false, 00:05:50.629 "enable_placement_id": 0, 00:05:50.629 "enable_zerocopy_send_server": true, 00:05:50.629 "enable_zerocopy_send_client": false, 00:05:50.629 "zerocopy_threshold": 0, 00:05:50.629 "tls_version": 0, 00:05:50.629 "enable_ktls": false 00:05:50.629 } 00:05:50.629 } 00:05:50.629 ] 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "subsystem": "vmd", 00:05:50.629 "config": [] 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "subsystem": "accel", 00:05:50.629 "config": [ 00:05:50.629 { 00:05:50.629 "method": "accel_set_options", 00:05:50.629 "params": { 00:05:50.629 "small_cache_size": 128, 00:05:50.629 "large_cache_size": 16, 00:05:50.629 "task_count": 2048, 00:05:50.629 "sequence_count": 2048, 00:05:50.629 "buf_count": 2048 00:05:50.629 } 00:05:50.629 } 00:05:50.629 ] 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "subsystem": "bdev", 00:05:50.629 "config": [ 00:05:50.629 { 00:05:50.629 "method": "bdev_set_options", 00:05:50.629 "params": { 00:05:50.629 "bdev_io_pool_size": 65535, 00:05:50.629 "bdev_io_cache_size": 256, 00:05:50.629 "bdev_auto_examine": true, 00:05:50.629 "iobuf_small_cache_size": 128, 00:05:50.629 "iobuf_large_cache_size": 16 00:05:50.629 } 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "method": "bdev_raid_set_options", 00:05:50.629 "params": { 00:05:50.629 "process_window_size_kb": 1024, 00:05:50.629 "process_max_bandwidth_mb_sec": 0 00:05:50.629 } 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "method": "bdev_iscsi_set_options", 00:05:50.629 "params": { 00:05:50.629 
"timeout_sec": 30 00:05:50.629 } 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "method": "bdev_nvme_set_options", 00:05:50.629 "params": { 00:05:50.629 "action_on_timeout": "none", 00:05:50.629 "timeout_us": 0, 00:05:50.629 "timeout_admin_us": 0, 00:05:50.629 "keep_alive_timeout_ms": 10000, 00:05:50.629 "arbitration_burst": 0, 00:05:50.629 "low_priority_weight": 0, 00:05:50.629 "medium_priority_weight": 0, 00:05:50.629 "high_priority_weight": 0, 00:05:50.629 "nvme_adminq_poll_period_us": 10000, 00:05:50.629 "nvme_ioq_poll_period_us": 0, 00:05:50.629 "io_queue_requests": 0, 00:05:50.629 "delay_cmd_submit": true, 00:05:50.629 "transport_retry_count": 4, 00:05:50.629 "bdev_retry_count": 3, 00:05:50.629 "transport_ack_timeout": 0, 00:05:50.629 "ctrlr_loss_timeout_sec": 0, 00:05:50.629 "reconnect_delay_sec": 0, 00:05:50.629 "fast_io_fail_timeout_sec": 0, 00:05:50.629 "disable_auto_failback": false, 00:05:50.629 "generate_uuids": false, 00:05:50.629 "transport_tos": 0, 00:05:50.629 "nvme_error_stat": false, 00:05:50.629 "rdma_srq_size": 0, 00:05:50.629 "io_path_stat": false, 00:05:50.629 "allow_accel_sequence": false, 00:05:50.629 "rdma_max_cq_size": 0, 00:05:50.629 "rdma_cm_event_timeout_ms": 0, 00:05:50.629 "dhchap_digests": [ 00:05:50.629 "sha256", 00:05:50.629 "sha384", 00:05:50.629 "sha512" 00:05:50.629 ], 00:05:50.629 "dhchap_dhgroups": [ 00:05:50.629 "null", 00:05:50.629 "ffdhe2048", 00:05:50.629 "ffdhe3072", 00:05:50.629 "ffdhe4096", 00:05:50.629 "ffdhe6144", 00:05:50.629 "ffdhe8192" 00:05:50.629 ] 00:05:50.629 } 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "method": "bdev_nvme_set_hotplug", 00:05:50.629 "params": { 00:05:50.629 "period_us": 100000, 00:05:50.629 "enable": false 00:05:50.629 } 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "method": "bdev_wait_for_examine" 00:05:50.629 } 00:05:50.629 ] 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "subsystem": "scsi", 00:05:50.629 "config": null 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "subsystem": "scheduler", 
00:05:50.629 "config": [ 00:05:50.629 { 00:05:50.629 "method": "framework_set_scheduler", 00:05:50.629 "params": { 00:05:50.629 "name": "static" 00:05:50.629 } 00:05:50.629 } 00:05:50.629 ] 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "subsystem": "vhost_scsi", 00:05:50.629 "config": [] 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "subsystem": "vhost_blk", 00:05:50.629 "config": [] 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "subsystem": "ublk", 00:05:50.629 "config": [] 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "subsystem": "nbd", 00:05:50.629 "config": [] 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "subsystem": "nvmf", 00:05:50.629 "config": [ 00:05:50.629 { 00:05:50.629 "method": "nvmf_set_config", 00:05:50.629 "params": { 00:05:50.629 "discovery_filter": "match_any", 00:05:50.629 "admin_cmd_passthru": { 00:05:50.629 "identify_ctrlr": false 00:05:50.629 }, 00:05:50.629 "dhchap_digests": [ 00:05:50.629 "sha256", 00:05:50.629 "sha384", 00:05:50.629 "sha512" 00:05:50.629 ], 00:05:50.629 "dhchap_dhgroups": [ 00:05:50.629 "null", 00:05:50.629 "ffdhe2048", 00:05:50.629 "ffdhe3072", 00:05:50.629 "ffdhe4096", 00:05:50.629 "ffdhe6144", 00:05:50.629 "ffdhe8192" 00:05:50.629 ] 00:05:50.629 } 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "method": "nvmf_set_max_subsystems", 00:05:50.629 "params": { 00:05:50.629 "max_subsystems": 1024 00:05:50.629 } 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "method": "nvmf_set_crdt", 00:05:50.629 "params": { 00:05:50.629 "crdt1": 0, 00:05:50.629 "crdt2": 0, 00:05:50.629 "crdt3": 0 00:05:50.629 } 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "method": "nvmf_create_transport", 00:05:50.629 "params": { 00:05:50.629 "trtype": "TCP", 00:05:50.629 "max_queue_depth": 128, 00:05:50.629 "max_io_qpairs_per_ctrlr": 127, 00:05:50.629 "in_capsule_data_size": 4096, 00:05:50.629 "max_io_size": 131072, 00:05:50.629 "io_unit_size": 131072, 00:05:50.629 "max_aq_depth": 128, 00:05:50.629 "num_shared_buffers": 511, 00:05:50.629 "buf_cache_size": 4294967295, 
00:05:50.629 "dif_insert_or_strip": false, 00:05:50.629 "zcopy": false, 00:05:50.629 "c2h_success": true, 00:05:50.629 "sock_priority": 0, 00:05:50.629 "abort_timeout_sec": 1, 00:05:50.629 "ack_timeout": 0, 00:05:50.629 "data_wr_pool_size": 0 00:05:50.629 } 00:05:50.629 } 00:05:50.629 ] 00:05:50.629 }, 00:05:50.629 { 00:05:50.629 "subsystem": "iscsi", 00:05:50.629 "config": [ 00:05:50.629 { 00:05:50.629 "method": "iscsi_set_options", 00:05:50.629 "params": { 00:05:50.629 "node_base": "iqn.2016-06.io.spdk", 00:05:50.629 "max_sessions": 128, 00:05:50.629 "max_connections_per_session": 2, 00:05:50.629 "max_queue_depth": 64, 00:05:50.629 "default_time2wait": 2, 00:05:50.629 "default_time2retain": 20, 00:05:50.629 "first_burst_length": 8192, 00:05:50.629 "immediate_data": true, 00:05:50.629 "allow_duplicated_isid": false, 00:05:50.629 "error_recovery_level": 0, 00:05:50.629 "nop_timeout": 60, 00:05:50.629 "nop_in_interval": 30, 00:05:50.629 "disable_chap": false, 00:05:50.629 "require_chap": false, 00:05:50.629 "mutual_chap": false, 00:05:50.629 "chap_group": 0, 00:05:50.629 "max_large_datain_per_connection": 64, 00:05:50.629 "max_r2t_per_connection": 4, 00:05:50.629 "pdu_pool_size": 36864, 00:05:50.629 "immediate_data_pool_size": 16384, 00:05:50.629 "data_out_pool_size": 2048 00:05:50.629 } 00:05:50.629 } 00:05:50.629 ] 00:05:50.629 } 00:05:50.629 ] 00:05:50.629 } 00:05:50.629 16:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:50.629 16:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69614 00:05:50.629 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69614 ']' 00:05:50.629 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69614 00:05:50.629 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:50.629 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:50.629 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69614 00:05:50.629 killing process with pid 69614 00:05:50.629 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.629 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.629 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69614' 00:05:50.629 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69614 00:05:50.629 16:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69614 00:05:50.889 16:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69642 00:05:50.889 16:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:50.889 16:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:56.226 16:31:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69642 00:05:56.226 16:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69642 ']' 00:05:56.226 16:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69642 00:05:56.226 16:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:56.226 16:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.226 16:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69642 00:05:56.226 killing process with pid 69642 00:05:56.226 16:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.226 16:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:05:56.226 16:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69642' 00:05:56.226 16:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69642 00:05:56.226 16:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69642 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:56.485 ************************************ 00:05:56.485 END TEST skip_rpc_with_json 00:05:56.485 ************************************ 00:05:56.485 00:05:56.485 real 0m6.997s 00:05:56.485 user 0m6.535s 00:05:56.485 sys 0m0.760s 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.485 16:31:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:56.485 16:31:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.485 16:31:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.485 16:31:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.485 ************************************ 00:05:56.485 START TEST skip_rpc_with_delay 00:05:56.485 ************************************ 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.485 [2024-12-07 16:31:55.315478] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:56.485 [2024-12-07 16:31:55.315677] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.485 00:05:56.485 real 0m0.157s 00:05:56.485 user 0m0.079s 00:05:56.485 sys 0m0.077s 00:05:56.485 ************************************ 00:05:56.485 END TEST skip_rpc_with_delay 00:05:56.485 ************************************ 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.485 16:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:56.744 16:31:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:56.744 16:31:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:56.744 16:31:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:56.744 16:31:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.744 16:31:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.744 16:31:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.744 ************************************ 00:05:56.744 START TEST exit_on_failed_rpc_init 00:05:56.744 ************************************ 00:05:56.744 16:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:56.744 16:31:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69754 00:05:56.744 16:31:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:05:56.744 16:31:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69754 00:05:56.744 16:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69754 ']' 00:05:56.744 16:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.744 16:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.744 16:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.744 16:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.744 16:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.744 [2024-12-07 16:31:55.542199] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:56.744 [2024-12-07 16:31:55.542352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69754 ] 00:05:57.003 [2024-12-07 16:31:55.689574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.003 [2024-12-07 16:31:55.733463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.570 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.570 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:57.570 16:31:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.570 16:31:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.570 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:57.570 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.570 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.570 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.570 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.570 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.570 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.570 16:31:56 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.570 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.570 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:57.570 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.570 [2024-12-07 16:31:56.462484] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:57.570 [2024-12-07 16:31:56.462702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69772 ] 00:05:57.829 [2024-12-07 16:31:56.617321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.829 [2024-12-07 16:31:56.686122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.829 [2024-12-07 16:31:56.686317] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:57.829 [2024-12-07 16:31:56.686386] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:57.829 [2024-12-07 16:31:56.686476] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69754 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69754 ']' 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69754 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69754 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69754' 
00:05:58.087 killing process with pid 69754 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69754 00:05:58.087 16:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69754 00:05:58.654 00:05:58.654 real 0m1.829s 00:05:58.654 user 0m2.005s 00:05:58.654 sys 0m0.537s 00:05:58.654 16:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.654 ************************************ 00:05:58.654 END TEST exit_on_failed_rpc_init 00:05:58.654 ************************************ 00:05:58.654 16:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:58.654 16:31:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:58.654 ************************************ 00:05:58.654 END TEST skip_rpc 00:05:58.654 ************************************ 00:05:58.654 00:05:58.654 real 0m14.957s 00:05:58.654 user 0m13.910s 00:05:58.654 sys 0m2.001s 00:05:58.654 16:31:57 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.654 16:31:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.654 16:31:57 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:58.654 16:31:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.654 16:31:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.654 16:31:57 -- common/autotest_common.sh@10 -- # set +x 00:05:58.654 ************************************ 00:05:58.654 START TEST rpc_client 00:05:58.654 ************************************ 00:05:58.654 16:31:57 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:58.654 * Looking for test storage... 
00:05:58.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:58.654 16:31:57 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:58.654 16:31:57 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:58.654 16:31:57 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:58.913 16:31:57 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.913 16:31:57 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:58.913 16:31:57 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.913 16:31:57 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:58.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.913 --rc genhtml_branch_coverage=1 00:05:58.913 --rc genhtml_function_coverage=1 00:05:58.913 --rc genhtml_legend=1 00:05:58.913 --rc geninfo_all_blocks=1 00:05:58.913 --rc geninfo_unexecuted_blocks=1 00:05:58.913 00:05:58.913 ' 00:05:58.913 16:31:57 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:58.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.913 --rc genhtml_branch_coverage=1 00:05:58.913 --rc genhtml_function_coverage=1 00:05:58.913 --rc genhtml_legend=1 00:05:58.913 --rc geninfo_all_blocks=1 00:05:58.913 --rc geninfo_unexecuted_blocks=1 00:05:58.913 00:05:58.913 ' 00:05:58.913 16:31:57 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:58.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.913 --rc genhtml_branch_coverage=1 00:05:58.913 --rc genhtml_function_coverage=1 00:05:58.913 --rc genhtml_legend=1 00:05:58.913 --rc geninfo_all_blocks=1 00:05:58.913 --rc geninfo_unexecuted_blocks=1 00:05:58.913 00:05:58.913 ' 00:05:58.913 16:31:57 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:58.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.913 --rc genhtml_branch_coverage=1 00:05:58.913 --rc genhtml_function_coverage=1 00:05:58.913 --rc genhtml_legend=1 00:05:58.913 --rc geninfo_all_blocks=1 00:05:58.913 --rc geninfo_unexecuted_blocks=1 00:05:58.913 00:05:58.913 ' 00:05:58.913 16:31:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:58.913 OK 00:05:58.913 16:31:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:58.913 ************************************ 00:05:58.913 END TEST rpc_client 00:05:58.913 ************************************ 00:05:58.913 00:05:58.913 real 0m0.293s 00:05:58.913 user 0m0.168s 00:05:58.913 sys 0m0.142s 00:05:58.913 16:31:57 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.913 16:31:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:58.913 16:31:57 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:58.913 16:31:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.913 16:31:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.913 16:31:57 -- common/autotest_common.sh@10 -- # set +x 00:05:58.913 ************************************ 00:05:58.913 START TEST json_config 00:05:58.913 ************************************ 00:05:58.913 16:31:57 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:59.173 16:31:57 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:59.173 16:31:57 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:59.173 16:31:57 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:59.173 16:31:57 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:59.173 16:31:57 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.173 16:31:57 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.173 16:31:57 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.173 16:31:57 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.173 16:31:57 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.173 16:31:57 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.173 16:31:57 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.173 16:31:57 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.173 16:31:57 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.173 16:31:57 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.173 16:31:57 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.173 16:31:57 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:59.173 16:31:57 json_config -- scripts/common.sh@345 -- # : 1 00:05:59.173 16:31:57 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.173 16:31:57 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.173 16:31:57 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:59.173 16:31:57 json_config -- scripts/common.sh@353 -- # local d=1 00:05:59.173 16:31:57 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.173 16:31:57 json_config -- scripts/common.sh@355 -- # echo 1 00:05:59.173 16:31:57 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.173 16:31:57 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:59.173 16:31:57 json_config -- scripts/common.sh@353 -- # local d=2 00:05:59.173 16:31:57 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.173 16:31:57 json_config -- scripts/common.sh@355 -- # echo 2 00:05:59.173 16:31:57 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.173 16:31:57 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.173 16:31:57 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.173 16:31:57 json_config -- scripts/common.sh@368 -- # return 0 00:05:59.173 16:31:57 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.173 16:31:57 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:59.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.173 --rc genhtml_branch_coverage=1 00:05:59.173 --rc genhtml_function_coverage=1 00:05:59.173 --rc genhtml_legend=1 00:05:59.173 --rc geninfo_all_blocks=1 00:05:59.173 --rc geninfo_unexecuted_blocks=1 00:05:59.173 00:05:59.173 ' 00:05:59.173 16:31:57 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:59.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.173 --rc genhtml_branch_coverage=1 00:05:59.173 --rc genhtml_function_coverage=1 00:05:59.173 --rc genhtml_legend=1 00:05:59.173 --rc geninfo_all_blocks=1 00:05:59.173 --rc geninfo_unexecuted_blocks=1 00:05:59.173 00:05:59.173 ' 00:05:59.173 16:31:57 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:59.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.173 --rc genhtml_branch_coverage=1 00:05:59.173 --rc genhtml_function_coverage=1 00:05:59.173 --rc genhtml_legend=1 00:05:59.173 --rc geninfo_all_blocks=1 00:05:59.173 --rc geninfo_unexecuted_blocks=1 00:05:59.173 00:05:59.173 ' 00:05:59.173 16:31:57 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:59.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.173 --rc genhtml_branch_coverage=1 00:05:59.173 --rc genhtml_function_coverage=1 00:05:59.173 --rc genhtml_legend=1 00:05:59.173 --rc geninfo_all_blocks=1 00:05:59.173 --rc geninfo_unexecuted_blocks=1 00:05:59.173 00:05:59.173 ' 00:05:59.173 16:31:57 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3b9abcfe-3bac-4150-8795-ff18896db5ae 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=3b9abcfe-3bac-4150-8795-ff18896db5ae 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:59.173 16:31:57 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:59.173 16:31:57 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.173 16:31:57 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.173 16:31:57 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.173 16:31:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.173 16:31:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.173 16:31:57 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.173 16:31:57 json_config -- paths/export.sh@5 -- # export PATH 00:05:59.173 16:31:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@51 -- # : 0 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:59.173 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:59.173 16:31:57 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:59.174 16:31:57 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:59.174 16:31:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:59.174 16:31:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:59.174 16:31:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:59.174 16:31:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:59.174 16:31:57 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:59.174 WARNING: No tests are enabled so not running JSON configuration tests 00:05:59.174 16:31:57 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:59.174 00:05:59.174 real 0m0.227s 00:05:59.174 user 0m0.133s 00:05:59.174 sys 0m0.099s 00:05:59.174 16:31:57 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.174 16:31:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.174 ************************************ 00:05:59.174 END TEST json_config 00:05:59.174 ************************************ 00:05:59.174 16:31:58 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:59.174 16:31:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.174 16:31:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.174 16:31:58 -- common/autotest_common.sh@10 -- # set +x 00:05:59.174 ************************************ 00:05:59.174 START TEST json_config_extra_key 00:05:59.174 ************************************ 00:05:59.174 16:31:58 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:59.435 16:31:58 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:59.435 16:31:58 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:05:59.435 16:31:58 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:59.435 16:31:58 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:59.435 16:31:58 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.435 16:31:58 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:59.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.435 --rc genhtml_branch_coverage=1 00:05:59.435 --rc genhtml_function_coverage=1 00:05:59.435 --rc genhtml_legend=1 00:05:59.435 --rc geninfo_all_blocks=1 00:05:59.435 --rc geninfo_unexecuted_blocks=1 00:05:59.435 00:05:59.435 ' 00:05:59.435 16:31:58 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:59.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.435 --rc genhtml_branch_coverage=1 00:05:59.435 --rc genhtml_function_coverage=1 00:05:59.435 --rc 
genhtml_legend=1 00:05:59.435 --rc geninfo_all_blocks=1 00:05:59.435 --rc geninfo_unexecuted_blocks=1 00:05:59.435 00:05:59.435 ' 00:05:59.435 16:31:58 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:59.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.435 --rc genhtml_branch_coverage=1 00:05:59.435 --rc genhtml_function_coverage=1 00:05:59.435 --rc genhtml_legend=1 00:05:59.435 --rc geninfo_all_blocks=1 00:05:59.435 --rc geninfo_unexecuted_blocks=1 00:05:59.435 00:05:59.435 ' 00:05:59.435 16:31:58 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:59.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.435 --rc genhtml_branch_coverage=1 00:05:59.435 --rc genhtml_function_coverage=1 00:05:59.435 --rc genhtml_legend=1 00:05:59.435 --rc geninfo_all_blocks=1 00:05:59.435 --rc geninfo_unexecuted_blocks=1 00:05:59.435 00:05:59.435 ' 00:05:59.435 16:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3b9abcfe-3bac-4150-8795-ff18896db5ae 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=3b9abcfe-3bac-4150-8795-ff18896db5ae 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.435 16:31:58 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.435 16:31:58 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.435 16:31:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.435 16:31:58 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.436 16:31:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.436 16:31:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:59.436 16:31:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.436 16:31:58 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:59.436 16:31:58 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:59.436 16:31:58 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:59.436 16:31:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.436 16:31:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.436 16:31:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:59.436 16:31:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:59.436 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:59.436 16:31:58 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:59.436 16:31:58 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:59.436 16:31:58 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:59.436 16:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:59.436 16:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:59.436 16:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:59.436 16:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:59.436 16:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:59.436 16:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:59.436 16:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:59.436 16:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:59.436 16:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:59.436 16:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:59.436 16:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:59.436 INFO: launching applications... 
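The `json_config/common.sh` setup traced above keeps per-application state in Bash associative arrays keyed by a short app name (`app_pid`, `app_socket`, `app_params`, `configs_path`). A minimal self-contained sketch of that bookkeeping pattern — array names taken from the trace, lookup code illustrative, not copied from SPDK:

```shell
#!/usr/bin/env bash
# One associative array each for the pid, the RPC socket path, and the
# launch parameters of every managed app, all keyed by app name.
declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')

app=target
# Everything needed to launch or address one app is looked up by its name.
echo "app=$app socket=${app_socket[$app]} params=${app_params[$app]}"
```

This is why the trace shows paired `app_pid=(...)` and `declare -A app_pid` lines: the assignment creates the array and the `declare -A` marks it associative.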
00:05:59.436 16:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:59.436 16:31:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:59.436 16:31:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:59.436 16:31:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:59.436 16:31:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:59.436 16:31:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:59.436 16:31:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.436 16:31:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.436 16:31:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69960 00:05:59.436 16:31:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:59.436 Waiting for target to run... 00:05:59.436 16:31:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69960 /var/tmp/spdk_tgt.sock 00:05:59.436 16:31:58 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:59.436 16:31:58 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69960 ']' 00:05:59.436 16:31:58 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:59.436 16:31:58 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.436 16:31:58 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:59.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:59.436 16:31:58 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.436 16:31:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:59.696 [2024-12-07 16:31:58.377000] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:59.696 [2024-12-07 16:31:58.377222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69960 ] 00:05:59.956 [2024-12-07 16:31:58.759053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.956 [2024-12-07 16:31:58.787037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.525 16:31:59 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.525 00:06:00.525 INFO: shutting down applications... 00:06:00.525 16:31:59 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:00.525 16:31:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:00.525 16:31:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
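The `waitforlisten 69960 /var/tmp/spdk_tgt.sock` call traced above blocks until the freshly launched target is up and listening on its UNIX domain socket. A hedged sketch of that polling loop — the function body here is an assumption, not SPDK's code; the real helper in `autotest_common.sh` also retries RPC calls against the socket:

```shell
# Poll until $pid is alive and a UNIX domain socket exists at $sock,
# checking every 100 ms and giving up after $max_retries attempts.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        # kill -0 probes for process existence without delivering a signal.
        kill -0 "$pid" 2>/dev/null || return 1   # process died: fail fast
        [ -S "$sock" ] && return 0               # socket is up: success
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Failing fast when the pid disappears is what lets the surrounding `trap ... ERR` handler report a crashed target instead of hanging for the full timeout.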
00:06:00.525 16:31:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:00.525 16:31:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:00.525 16:31:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:00.525 16:31:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69960 ]] 00:06:00.525 16:31:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69960 00:06:00.525 16:31:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:00.525 16:31:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.525 16:31:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69960 00:06:00.525 16:31:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:01.096 16:31:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:01.096 16:31:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.096 16:31:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69960 00:06:01.096 16:31:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:01.096 16:31:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:01.096 16:31:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:01.096 16:31:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:01.096 SPDK target shutdown done 00:06:01.096 16:31:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:01.096 Success 00:06:01.096 00:06:01.096 real 0m1.641s 00:06:01.096 user 0m1.322s 00:06:01.096 sys 0m0.484s 00:06:01.096 16:31:59 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.096 16:31:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:01.096 ************************************ 
00:06:01.096 END TEST json_config_extra_key 00:06:01.096 ************************************ 00:06:01.096 16:31:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.096 16:31:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.096 16:31:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.096 16:31:59 -- common/autotest_common.sh@10 -- # set +x 00:06:01.096 ************************************ 00:06:01.096 START TEST alias_rpc 00:06:01.096 ************************************ 00:06:01.096 16:31:59 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.096 * Looking for test storage... 00:06:01.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:01.096 16:31:59 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:01.096 16:31:59 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:01.096 16:31:59 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:01.096 16:31:59 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.096 16:31:59 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.096 16:31:59 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:01.096 16:31:59 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.096 16:31:59 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:01.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.096 --rc genhtml_branch_coverage=1 00:06:01.096 --rc genhtml_function_coverage=1 00:06:01.096 --rc genhtml_legend=1 00:06:01.096 --rc geninfo_all_blocks=1 00:06:01.096 --rc geninfo_unexecuted_blocks=1 00:06:01.096 00:06:01.096 ' 00:06:01.096 16:31:59 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:01.096 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.096 --rc genhtml_branch_coverage=1 00:06:01.096 --rc genhtml_function_coverage=1 00:06:01.096 --rc genhtml_legend=1 00:06:01.096 --rc geninfo_all_blocks=1 00:06:01.096 --rc geninfo_unexecuted_blocks=1 00:06:01.096 00:06:01.096 ' 00:06:01.096 16:31:59 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:01.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.096 --rc genhtml_branch_coverage=1 00:06:01.096 --rc genhtml_function_coverage=1 00:06:01.096 --rc genhtml_legend=1 00:06:01.097 --rc geninfo_all_blocks=1 00:06:01.097 --rc geninfo_unexecuted_blocks=1 00:06:01.097 00:06:01.097 ' 00:06:01.097 16:31:59 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:01.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.097 --rc genhtml_branch_coverage=1 00:06:01.097 --rc genhtml_function_coverage=1 00:06:01.097 --rc genhtml_legend=1 00:06:01.097 --rc geninfo_all_blocks=1 00:06:01.097 --rc geninfo_unexecuted_blocks=1 00:06:01.097 00:06:01.097 ' 00:06:01.097 16:31:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:01.097 16:31:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70028 00:06:01.097 16:31:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.097 16:31:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70028 00:06:01.097 16:31:59 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70028 ']' 00:06:01.097 16:31:59 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.097 16:31:59 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.097 16:31:59 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:01.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.097 16:31:59 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.097 16:31:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.357 [2024-12-07 16:32:00.080135] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:01.357 [2024-12-07 16:32:00.080353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70028 ] 00:06:01.357 [2024-12-07 16:32:00.239320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.616 [2024-12-07 16:32:00.283327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.186 16:32:00 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.186 16:32:00 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:02.186 16:32:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:02.447 16:32:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70028 00:06:02.447 16:32:01 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70028 ']' 00:06:02.447 16:32:01 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70028 00:06:02.447 16:32:01 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:02.447 16:32:01 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.447 16:32:01 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70028 00:06:02.447 killing process with pid 70028 00:06:02.447 16:32:01 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.447 16:32:01 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.447 16:32:01 alias_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70028' 00:06:02.447 16:32:01 alias_rpc -- common/autotest_common.sh@969 -- # kill 70028 00:06:02.447 16:32:01 alias_rpc -- common/autotest_common.sh@974 -- # wait 70028 00:06:02.707 ************************************ 00:06:02.707 END TEST alias_rpc 00:06:02.707 ************************************ 00:06:02.707 00:06:02.707 real 0m1.820s 00:06:02.707 user 0m1.851s 00:06:02.707 sys 0m0.516s 00:06:02.707 16:32:01 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.707 16:32:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.968 16:32:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:02.968 16:32:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:02.968 16:32:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.968 16:32:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.968 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:06:02.968 ************************************ 00:06:02.968 START TEST spdkcli_tcp 00:06:02.968 ************************************ 00:06:02.968 16:32:01 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:02.968 * Looking for test storage... 
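The `killprocess 70028` sequence traced above is the matching teardown: probe the pid, send SIGINT for a graceful shutdown, then `wait` to reap it. A minimal sketch under the same assumptions — the traced helper additionally verifies via `ps` that the pid still names the expected `reactor_0` process before signalling:

```shell
# Gracefully stop a child process: SIGINT, then reap it with wait.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone: nothing to do
    kill -SIGINT "$pid"
    # wait only reaps our own children; ignore the non-zero status
    # that a SIGINT-terminated process reports.
    wait "$pid" 2>/dev/null || true
}
```

Using SIGINT rather than SIGKILL gives the SPDK app a chance to run its shutdown path, which is why the log prints "SPDK target shutdown done" instead of an abort.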
00:06:02.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:02.968 16:32:01 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:02.968 16:32:01 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:02.968 16:32:01 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:02.968 16:32:01 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:02.968 16:32:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.228 16:32:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.228 16:32:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.228 16:32:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:03.228 16:32:01 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.228 16:32:01 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:03.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.228 --rc genhtml_branch_coverage=1 00:06:03.228 --rc genhtml_function_coverage=1 00:06:03.228 --rc genhtml_legend=1 00:06:03.228 --rc geninfo_all_blocks=1 00:06:03.228 --rc geninfo_unexecuted_blocks=1 00:06:03.228 00:06:03.228 ' 00:06:03.228 16:32:01 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:03.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.228 --rc genhtml_branch_coverage=1 00:06:03.228 --rc genhtml_function_coverage=1 00:06:03.228 --rc genhtml_legend=1 00:06:03.228 --rc geninfo_all_blocks=1 00:06:03.228 --rc geninfo_unexecuted_blocks=1 00:06:03.228 00:06:03.228 ' 00:06:03.228 16:32:01 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:03.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.228 --rc genhtml_branch_coverage=1 00:06:03.228 --rc genhtml_function_coverage=1 00:06:03.228 --rc genhtml_legend=1 00:06:03.228 --rc geninfo_all_blocks=1 00:06:03.228 --rc geninfo_unexecuted_blocks=1 00:06:03.228 00:06:03.228 ' 00:06:03.228 16:32:01 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:03.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.228 --rc genhtml_branch_coverage=1 00:06:03.228 --rc genhtml_function_coverage=1 00:06:03.228 --rc genhtml_legend=1 00:06:03.228 --rc geninfo_all_blocks=1 00:06:03.228 --rc geninfo_unexecuted_blocks=1 00:06:03.228 00:06:03.228 ' 00:06:03.228 16:32:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:03.228 16:32:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:03.228 16:32:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:03.228 16:32:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:03.228 16:32:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:03.228 16:32:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:03.228 16:32:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:03.228 16:32:01 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.228 16:32:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.228 16:32:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70113 00:06:03.228 16:32:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:03.228 16:32:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70113 00:06:03.228 16:32:01 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 70113 ']' 00:06:03.228 16:32:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.228 16:32:01 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.228 16:32:01 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.228 16:32:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.228 16:32:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.228 [2024-12-07 16:32:01.968710] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:03.228 [2024-12-07 16:32:01.968904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70113 ] 00:06:03.488 [2024-12-07 16:32:02.127070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.488 [2024-12-07 16:32:02.173457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.488 [2024-12-07 16:32:02.173528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.057 16:32:02 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.057 16:32:02 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:04.057 16:32:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70130 00:06:04.057 16:32:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:04.057 16:32:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:04.317 [ 00:06:04.317 "bdev_malloc_delete", 
00:06:04.317 "bdev_malloc_create", 00:06:04.317 "bdev_null_resize", 00:06:04.317 "bdev_null_delete", 00:06:04.317 "bdev_null_create", 00:06:04.317 "bdev_nvme_cuse_unregister", 00:06:04.317 "bdev_nvme_cuse_register", 00:06:04.317 "bdev_opal_new_user", 00:06:04.317 "bdev_opal_set_lock_state", 00:06:04.317 "bdev_opal_delete", 00:06:04.317 "bdev_opal_get_info", 00:06:04.317 "bdev_opal_create", 00:06:04.317 "bdev_nvme_opal_revert", 00:06:04.317 "bdev_nvme_opal_init", 00:06:04.317 "bdev_nvme_send_cmd", 00:06:04.317 "bdev_nvme_set_keys", 00:06:04.317 "bdev_nvme_get_path_iostat", 00:06:04.317 "bdev_nvme_get_mdns_discovery_info", 00:06:04.317 "bdev_nvme_stop_mdns_discovery", 00:06:04.317 "bdev_nvme_start_mdns_discovery", 00:06:04.317 "bdev_nvme_set_multipath_policy", 00:06:04.317 "bdev_nvme_set_preferred_path", 00:06:04.317 "bdev_nvme_get_io_paths", 00:06:04.317 "bdev_nvme_remove_error_injection", 00:06:04.317 "bdev_nvme_add_error_injection", 00:06:04.317 "bdev_nvme_get_discovery_info", 00:06:04.317 "bdev_nvme_stop_discovery", 00:06:04.317 "bdev_nvme_start_discovery", 00:06:04.317 "bdev_nvme_get_controller_health_info", 00:06:04.317 "bdev_nvme_disable_controller", 00:06:04.317 "bdev_nvme_enable_controller", 00:06:04.317 "bdev_nvme_reset_controller", 00:06:04.317 "bdev_nvme_get_transport_statistics", 00:06:04.317 "bdev_nvme_apply_firmware", 00:06:04.317 "bdev_nvme_detach_controller", 00:06:04.317 "bdev_nvme_get_controllers", 00:06:04.317 "bdev_nvme_attach_controller", 00:06:04.317 "bdev_nvme_set_hotplug", 00:06:04.317 "bdev_nvme_set_options", 00:06:04.317 "bdev_passthru_delete", 00:06:04.317 "bdev_passthru_create", 00:06:04.317 "bdev_lvol_set_parent_bdev", 00:06:04.317 "bdev_lvol_set_parent", 00:06:04.317 "bdev_lvol_check_shallow_copy", 00:06:04.317 "bdev_lvol_start_shallow_copy", 00:06:04.317 "bdev_lvol_grow_lvstore", 00:06:04.317 "bdev_lvol_get_lvols", 00:06:04.317 "bdev_lvol_get_lvstores", 00:06:04.317 "bdev_lvol_delete", 00:06:04.317 "bdev_lvol_set_read_only", 
00:06:04.317 "bdev_lvol_resize", 00:06:04.317 "bdev_lvol_decouple_parent", 00:06:04.317 "bdev_lvol_inflate", 00:06:04.317 "bdev_lvol_rename", 00:06:04.317 "bdev_lvol_clone_bdev", 00:06:04.317 "bdev_lvol_clone", 00:06:04.317 "bdev_lvol_snapshot", 00:06:04.317 "bdev_lvol_create", 00:06:04.317 "bdev_lvol_delete_lvstore", 00:06:04.317 "bdev_lvol_rename_lvstore", 00:06:04.317 "bdev_lvol_create_lvstore", 00:06:04.317 "bdev_raid_set_options", 00:06:04.317 "bdev_raid_remove_base_bdev", 00:06:04.317 "bdev_raid_add_base_bdev", 00:06:04.317 "bdev_raid_delete", 00:06:04.317 "bdev_raid_create", 00:06:04.317 "bdev_raid_get_bdevs", 00:06:04.317 "bdev_error_inject_error", 00:06:04.317 "bdev_error_delete", 00:06:04.317 "bdev_error_create", 00:06:04.317 "bdev_split_delete", 00:06:04.317 "bdev_split_create", 00:06:04.317 "bdev_delay_delete", 00:06:04.317 "bdev_delay_create", 00:06:04.317 "bdev_delay_update_latency", 00:06:04.317 "bdev_zone_block_delete", 00:06:04.317 "bdev_zone_block_create", 00:06:04.317 "blobfs_create", 00:06:04.317 "blobfs_detect", 00:06:04.317 "blobfs_set_cache_size", 00:06:04.317 "bdev_aio_delete", 00:06:04.317 "bdev_aio_rescan", 00:06:04.317 "bdev_aio_create", 00:06:04.317 "bdev_ftl_set_property", 00:06:04.317 "bdev_ftl_get_properties", 00:06:04.317 "bdev_ftl_get_stats", 00:06:04.317 "bdev_ftl_unmap", 00:06:04.317 "bdev_ftl_unload", 00:06:04.317 "bdev_ftl_delete", 00:06:04.317 "bdev_ftl_load", 00:06:04.317 "bdev_ftl_create", 00:06:04.317 "bdev_virtio_attach_controller", 00:06:04.317 "bdev_virtio_scsi_get_devices", 00:06:04.317 "bdev_virtio_detach_controller", 00:06:04.317 "bdev_virtio_blk_set_hotplug", 00:06:04.317 "bdev_iscsi_delete", 00:06:04.317 "bdev_iscsi_create", 00:06:04.317 "bdev_iscsi_set_options", 00:06:04.317 "accel_error_inject_error", 00:06:04.317 "ioat_scan_accel_module", 00:06:04.317 "dsa_scan_accel_module", 00:06:04.317 "iaa_scan_accel_module", 00:06:04.317 "keyring_file_remove_key", 00:06:04.317 "keyring_file_add_key", 00:06:04.317 
"keyring_linux_set_options", 00:06:04.317 "fsdev_aio_delete", 00:06:04.317 "fsdev_aio_create", 00:06:04.317 "iscsi_get_histogram", 00:06:04.317 "iscsi_enable_histogram", 00:06:04.317 "iscsi_set_options", 00:06:04.317 "iscsi_get_auth_groups", 00:06:04.317 "iscsi_auth_group_remove_secret", 00:06:04.317 "iscsi_auth_group_add_secret", 00:06:04.317 "iscsi_delete_auth_group", 00:06:04.317 "iscsi_create_auth_group", 00:06:04.317 "iscsi_set_discovery_auth", 00:06:04.317 "iscsi_get_options", 00:06:04.317 "iscsi_target_node_request_logout", 00:06:04.317 "iscsi_target_node_set_redirect", 00:06:04.317 "iscsi_target_node_set_auth", 00:06:04.317 "iscsi_target_node_add_lun", 00:06:04.317 "iscsi_get_stats", 00:06:04.317 "iscsi_get_connections", 00:06:04.317 "iscsi_portal_group_set_auth", 00:06:04.317 "iscsi_start_portal_group", 00:06:04.317 "iscsi_delete_portal_group", 00:06:04.317 "iscsi_create_portal_group", 00:06:04.317 "iscsi_get_portal_groups", 00:06:04.317 "iscsi_delete_target_node", 00:06:04.317 "iscsi_target_node_remove_pg_ig_maps", 00:06:04.317 "iscsi_target_node_add_pg_ig_maps", 00:06:04.317 "iscsi_create_target_node", 00:06:04.317 "iscsi_get_target_nodes", 00:06:04.318 "iscsi_delete_initiator_group", 00:06:04.318 "iscsi_initiator_group_remove_initiators", 00:06:04.318 "iscsi_initiator_group_add_initiators", 00:06:04.318 "iscsi_create_initiator_group", 00:06:04.318 "iscsi_get_initiator_groups", 00:06:04.318 "nvmf_set_crdt", 00:06:04.318 "nvmf_set_config", 00:06:04.318 "nvmf_set_max_subsystems", 00:06:04.318 "nvmf_stop_mdns_prr", 00:06:04.318 "nvmf_publish_mdns_prr", 00:06:04.318 "nvmf_subsystem_get_listeners", 00:06:04.318 "nvmf_subsystem_get_qpairs", 00:06:04.318 "nvmf_subsystem_get_controllers", 00:06:04.318 "nvmf_get_stats", 00:06:04.318 "nvmf_get_transports", 00:06:04.318 "nvmf_create_transport", 00:06:04.318 "nvmf_get_targets", 00:06:04.318 "nvmf_delete_target", 00:06:04.318 "nvmf_create_target", 00:06:04.318 "nvmf_subsystem_allow_any_host", 00:06:04.318 
"nvmf_subsystem_set_keys", 00:06:04.318 "nvmf_subsystem_remove_host", 00:06:04.318 "nvmf_subsystem_add_host", 00:06:04.318 "nvmf_ns_remove_host", 00:06:04.318 "nvmf_ns_add_host", 00:06:04.318 "nvmf_subsystem_remove_ns", 00:06:04.318 "nvmf_subsystem_set_ns_ana_group", 00:06:04.318 "nvmf_subsystem_add_ns", 00:06:04.318 "nvmf_subsystem_listener_set_ana_state", 00:06:04.318 "nvmf_discovery_get_referrals", 00:06:04.318 "nvmf_discovery_remove_referral", 00:06:04.318 "nvmf_discovery_add_referral", 00:06:04.318 "nvmf_subsystem_remove_listener", 00:06:04.318 "nvmf_subsystem_add_listener", 00:06:04.318 "nvmf_delete_subsystem", 00:06:04.318 "nvmf_create_subsystem", 00:06:04.318 "nvmf_get_subsystems", 00:06:04.318 "env_dpdk_get_mem_stats", 00:06:04.318 "nbd_get_disks", 00:06:04.318 "nbd_stop_disk", 00:06:04.318 "nbd_start_disk", 00:06:04.318 "ublk_recover_disk", 00:06:04.318 "ublk_get_disks", 00:06:04.318 "ublk_stop_disk", 00:06:04.318 "ublk_start_disk", 00:06:04.318 "ublk_destroy_target", 00:06:04.318 "ublk_create_target", 00:06:04.318 "virtio_blk_create_transport", 00:06:04.318 "virtio_blk_get_transports", 00:06:04.318 "vhost_controller_set_coalescing", 00:06:04.318 "vhost_get_controllers", 00:06:04.318 "vhost_delete_controller", 00:06:04.318 "vhost_create_blk_controller", 00:06:04.318 "vhost_scsi_controller_remove_target", 00:06:04.318 "vhost_scsi_controller_add_target", 00:06:04.318 "vhost_start_scsi_controller", 00:06:04.318 "vhost_create_scsi_controller", 00:06:04.318 "thread_set_cpumask", 00:06:04.318 "scheduler_set_options", 00:06:04.318 "framework_get_governor", 00:06:04.318 "framework_get_scheduler", 00:06:04.318 "framework_set_scheduler", 00:06:04.318 "framework_get_reactors", 00:06:04.318 "thread_get_io_channels", 00:06:04.318 "thread_get_pollers", 00:06:04.318 "thread_get_stats", 00:06:04.318 "framework_monitor_context_switch", 00:06:04.318 "spdk_kill_instance", 00:06:04.318 "log_enable_timestamps", 00:06:04.318 "log_get_flags", 00:06:04.318 "log_clear_flag", 
00:06:04.318 "log_set_flag", 00:06:04.318 "log_get_level", 00:06:04.318 "log_set_level", 00:06:04.318 "log_get_print_level", 00:06:04.318 "log_set_print_level", 00:06:04.318 "framework_enable_cpumask_locks", 00:06:04.318 "framework_disable_cpumask_locks", 00:06:04.318 "framework_wait_init", 00:06:04.318 "framework_start_init", 00:06:04.318 "scsi_get_devices", 00:06:04.318 "bdev_get_histogram", 00:06:04.318 "bdev_enable_histogram", 00:06:04.318 "bdev_set_qos_limit", 00:06:04.318 "bdev_set_qd_sampling_period", 00:06:04.318 "bdev_get_bdevs", 00:06:04.318 "bdev_reset_iostat", 00:06:04.318 "bdev_get_iostat", 00:06:04.318 "bdev_examine", 00:06:04.318 "bdev_wait_for_examine", 00:06:04.318 "bdev_set_options", 00:06:04.318 "accel_get_stats", 00:06:04.318 "accel_set_options", 00:06:04.318 "accel_set_driver", 00:06:04.318 "accel_crypto_key_destroy", 00:06:04.318 "accel_crypto_keys_get", 00:06:04.318 "accel_crypto_key_create", 00:06:04.318 "accel_assign_opc", 00:06:04.318 "accel_get_module_info", 00:06:04.318 "accel_get_opc_assignments", 00:06:04.318 "vmd_rescan", 00:06:04.318 "vmd_remove_device", 00:06:04.318 "vmd_enable", 00:06:04.318 "sock_get_default_impl", 00:06:04.318 "sock_set_default_impl", 00:06:04.318 "sock_impl_set_options", 00:06:04.318 "sock_impl_get_options", 00:06:04.318 "iobuf_get_stats", 00:06:04.318 "iobuf_set_options", 00:06:04.318 "keyring_get_keys", 00:06:04.318 "framework_get_pci_devices", 00:06:04.318 "framework_get_config", 00:06:04.318 "framework_get_subsystems", 00:06:04.318 "fsdev_set_opts", 00:06:04.318 "fsdev_get_opts", 00:06:04.318 "trace_get_info", 00:06:04.318 "trace_get_tpoint_group_mask", 00:06:04.318 "trace_disable_tpoint_group", 00:06:04.318 "trace_enable_tpoint_group", 00:06:04.318 "trace_clear_tpoint_mask", 00:06:04.318 "trace_set_tpoint_mask", 00:06:04.318 "notify_get_notifications", 00:06:04.318 "notify_get_types", 00:06:04.318 "spdk_get_version", 00:06:04.318 "rpc_get_methods" 00:06:04.318 ] 00:06:04.318 16:32:02 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:04.318 16:32:02 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.318 16:32:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.318 16:32:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:04.318 16:32:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70113 00:06:04.318 16:32:03 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70113 ']' 00:06:04.318 16:32:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70113 00:06:04.318 16:32:03 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:04.318 16:32:03 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.318 16:32:03 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70113 00:06:04.318 16:32:03 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.318 16:32:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.318 16:32:03 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70113' 00:06:04.318 killing process with pid 70113 00:06:04.318 16:32:03 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70113 00:06:04.318 16:32:03 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70113 00:06:04.578 ************************************ 00:06:04.578 END TEST spdkcli_tcp 00:06:04.578 ************************************ 00:06:04.578 00:06:04.578 real 0m1.820s 00:06:04.578 user 0m2.948s 00:06:04.578 sys 0m0.605s 00:06:04.578 16:32:03 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.578 16:32:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.838 16:32:03 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:04.838 16:32:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.838 16:32:03 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.838 16:32:03 -- common/autotest_common.sh@10 -- # set +x 00:06:04.838 ************************************ 00:06:04.838 START TEST dpdk_mem_utility 00:06:04.838 ************************************ 00:06:04.838 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:04.838 * Looking for test storage... 00:06:04.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:04.838 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.838 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.838 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:05.097 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:05.097 
16:32:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.097 16:32:03 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:05.097 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.097 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:05.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.097 --rc genhtml_branch_coverage=1 00:06:05.097 --rc genhtml_function_coverage=1 00:06:05.097 --rc genhtml_legend=1 00:06:05.097 --rc geninfo_all_blocks=1 00:06:05.097 --rc geninfo_unexecuted_blocks=1 00:06:05.097 00:06:05.097 ' 00:06:05.097 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:05.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.097 --rc 
genhtml_branch_coverage=1 00:06:05.097 --rc genhtml_function_coverage=1 00:06:05.097 --rc genhtml_legend=1 00:06:05.097 --rc geninfo_all_blocks=1 00:06:05.097 --rc geninfo_unexecuted_blocks=1 00:06:05.097 00:06:05.097 ' 00:06:05.097 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:05.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.097 --rc genhtml_branch_coverage=1 00:06:05.097 --rc genhtml_function_coverage=1 00:06:05.097 --rc genhtml_legend=1 00:06:05.097 --rc geninfo_all_blocks=1 00:06:05.097 --rc geninfo_unexecuted_blocks=1 00:06:05.097 00:06:05.097 ' 00:06:05.097 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:05.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.097 --rc genhtml_branch_coverage=1 00:06:05.097 --rc genhtml_function_coverage=1 00:06:05.097 --rc genhtml_legend=1 00:06:05.097 --rc geninfo_all_blocks=1 00:06:05.097 --rc geninfo_unexecuted_blocks=1 00:06:05.097 00:06:05.097 ' 00:06:05.097 16:32:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:05.097 16:32:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70213 00:06:05.098 16:32:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.098 16:32:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70213 00:06:05.098 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70213 ']' 00:06:05.098 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.098 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.098 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:05.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.098 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.098 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.098 [2024-12-07 16:32:03.857541] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:05.098 [2024-12-07 16:32:03.857793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70213 ] 00:06:05.358 [2024-12-07 16:32:04.022590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.358 [2024-12-07 16:32:04.068329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.929 16:32:04 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.929 16:32:04 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:05.929 16:32:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:05.929 16:32:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:05.929 16:32:04 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.929 16:32:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.929 { 00:06:05.929 "filename": "/tmp/spdk_mem_dump.txt" 00:06:05.929 } 00:06:05.929 16:32:04 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.929 16:32:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:05.929 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:05.929 1 heaps totaling size 860.000000 MiB 00:06:05.929 size: 
860.000000 MiB heap id: 0 00:06:05.929 end heaps---------- 00:06:05.929 9 mempools totaling size 642.649841 MiB 00:06:05.929 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:05.929 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:05.929 size: 92.545471 MiB name: bdev_io_70213 00:06:05.929 size: 51.011292 MiB name: evtpool_70213 00:06:05.929 size: 50.003479 MiB name: msgpool_70213 00:06:05.929 size: 36.509338 MiB name: fsdev_io_70213 00:06:05.929 size: 21.763794 MiB name: PDU_Pool 00:06:05.929 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:05.929 size: 0.026123 MiB name: Session_Pool 00:06:05.929 end mempools------- 00:06:05.929 6 memzones totaling size 4.142822 MiB 00:06:05.929 size: 1.000366 MiB name: RG_ring_0_70213 00:06:05.929 size: 1.000366 MiB name: RG_ring_1_70213 00:06:05.929 size: 1.000366 MiB name: RG_ring_4_70213 00:06:05.929 size: 1.000366 MiB name: RG_ring_5_70213 00:06:05.929 size: 0.125366 MiB name: RG_ring_2_70213 00:06:05.929 size: 0.015991 MiB name: RG_ring_3_70213 00:06:05.929 end memzones------- 00:06:05.929 16:32:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:05.930 heap id: 0 total size: 860.000000 MiB number of busy elements: 303 number of free elements: 16 00:06:05.930 list of free elements. 
size: 13.937256 MiB 00:06:05.930 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:05.930 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:05.930 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:05.930 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:05.930 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:05.930 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:05.930 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:05.930 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:05.930 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:05.930 element at address: 0x20001d800000 with size: 0.568420 MiB 00:06:05.930 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:05.930 element at address: 0x200003e00000 with size: 0.488464 MiB 00:06:05.930 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:05.930 element at address: 0x200007000000 with size: 0.480469 MiB 00:06:05.930 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:05.930 element at address: 0x200003a00000 with size: 0.353027 MiB 00:06:05.930 list of standard malloc elements. 
size: 199.266052 MiB 00:06:05.930 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:05.930 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:05.930 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:05.930 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:05.930 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:05.930 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:05.930 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:05.930 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:05.930 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:05.930 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:05.930 element at 
address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003a5a800 with size: 0.000183 MiB 
00:06:05.930 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7d900 with 
size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:05.930 element at address: 
0x200003e7ee00 with size: 0.000183 MiB 00:06:05.930 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:05.930 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:05.930 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:05.930 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:05.930 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:05.930 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:05.930 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:05.930 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:05.930 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:05.930 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:05.930 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:05.930 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:05.930 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:05.931 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:05.931 
element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892bc0 with size: 0.000183 
MiB 00:06:05.931 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d8940c0 
with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:05.931 element at 
address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6d680 with size: 0.000183 MiB 
00:06:05.931 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6eb80 with 
size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:05.931 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:05.932 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:05.932 list of memzone associated elements. 
size: 646.796692 MiB 00:06:05.932 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:05.932 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:05.932 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:05.932 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:05.932 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:05.932 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70213_0 00:06:05.932 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:05.932 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70213_0 00:06:05.932 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:05.932 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70213_0 00:06:05.932 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:05.932 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70213_0 00:06:05.932 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:05.932 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:05.932 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:05.932 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:05.932 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:05.932 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70213 00:06:05.932 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:05.932 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70213 00:06:05.932 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:05.932 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70213 00:06:05.932 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:05.932 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:05.932 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:05.932 associated memzone 
info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:05.932 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:05.932 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:05.932 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:05.932 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:05.932 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:05.932 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70213 00:06:05.932 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:05.932 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70213 00:06:05.932 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:05.932 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70213 00:06:05.932 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:05.932 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70213 00:06:05.932 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:05.932 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70213 00:06:05.932 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:05.932 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70213 00:06:05.932 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:05.932 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:05.932 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:05.932 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:05.932 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:05.932 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:05.932 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:06:05.932 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70213 00:06:05.932 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:05.932 associated 
memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:05.932 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:05.932 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:05.932 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:06:05.932 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70213 00:06:05.932 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:05.932 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:05.932 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:05.932 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70213 00:06:05.932 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:05.932 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70213 00:06:05.932 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:06:05.932 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70213 00:06:05.932 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:05.932 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:05.932 16:32:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:05.932 16:32:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70213 00:06:05.932 16:32:04 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70213 ']' 00:06:05.932 16:32:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70213 00:06:05.932 16:32:04 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:05.932 16:32:04 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.932 16:32:04 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70213 00:06:06.192 16:32:04 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.192 16:32:04 dpdk_mem_utility -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.192 16:32:04 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70213' 00:06:06.192 killing process with pid 70213 00:06:06.192 16:32:04 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70213 00:06:06.192 16:32:04 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70213 00:06:06.450 00:06:06.450 real 0m1.694s 00:06:06.450 user 0m1.597s 00:06:06.450 sys 0m0.538s 00:06:06.450 16:32:05 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.450 16:32:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.450 ************************************ 00:06:06.450 END TEST dpdk_mem_utility 00:06:06.450 ************************************ 00:06:06.450 16:32:05 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:06.450 16:32:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.450 16:32:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.450 16:32:05 -- common/autotest_common.sh@10 -- # set +x 00:06:06.450 ************************************ 00:06:06.450 START TEST event 00:06:06.450 ************************************ 00:06:06.450 16:32:05 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:06.708 * Looking for test storage... 
00:06:06.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:06.708 16:32:05 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.708 16:32:05 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.708 16:32:05 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.708 16:32:05 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.708 16:32:05 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.708 16:32:05 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.708 16:32:05 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.708 16:32:05 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.708 16:32:05 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.708 16:32:05 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.708 16:32:05 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.708 16:32:05 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.708 16:32:05 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.708 16:32:05 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.708 16:32:05 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.708 16:32:05 event -- scripts/common.sh@344 -- # case "$op" in 00:06:06.708 16:32:05 event -- scripts/common.sh@345 -- # : 1 00:06:06.708 16:32:05 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.708 16:32:05 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.708 16:32:05 event -- scripts/common.sh@365 -- # decimal 1 00:06:06.708 16:32:05 event -- scripts/common.sh@353 -- # local d=1 00:06:06.708 16:32:05 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.708 16:32:05 event -- scripts/common.sh@355 -- # echo 1 00:06:06.708 16:32:05 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.708 16:32:05 event -- scripts/common.sh@366 -- # decimal 2 00:06:06.708 16:32:05 event -- scripts/common.sh@353 -- # local d=2 00:06:06.708 16:32:05 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.708 16:32:05 event -- scripts/common.sh@355 -- # echo 2 00:06:06.708 16:32:05 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.708 16:32:05 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.708 16:32:05 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.708 16:32:05 event -- scripts/common.sh@368 -- # return 0 00:06:06.709 16:32:05 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.709 16:32:05 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.709 --rc genhtml_branch_coverage=1 00:06:06.709 --rc genhtml_function_coverage=1 00:06:06.709 --rc genhtml_legend=1 00:06:06.709 --rc geninfo_all_blocks=1 00:06:06.709 --rc geninfo_unexecuted_blocks=1 00:06:06.709 00:06:06.709 ' 00:06:06.709 16:32:05 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.709 --rc genhtml_branch_coverage=1 00:06:06.709 --rc genhtml_function_coverage=1 00:06:06.709 --rc genhtml_legend=1 00:06:06.709 --rc geninfo_all_blocks=1 00:06:06.709 --rc geninfo_unexecuted_blocks=1 00:06:06.709 00:06:06.709 ' 00:06:06.709 16:32:05 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.709 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:06.709 --rc genhtml_branch_coverage=1 00:06:06.709 --rc genhtml_function_coverage=1 00:06:06.709 --rc genhtml_legend=1 00:06:06.709 --rc geninfo_all_blocks=1 00:06:06.709 --rc geninfo_unexecuted_blocks=1 00:06:06.709 00:06:06.709 ' 00:06:06.709 16:32:05 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.709 --rc genhtml_branch_coverage=1 00:06:06.709 --rc genhtml_function_coverage=1 00:06:06.709 --rc genhtml_legend=1 00:06:06.709 --rc geninfo_all_blocks=1 00:06:06.709 --rc geninfo_unexecuted_blocks=1 00:06:06.709 00:06:06.709 ' 00:06:06.709 16:32:05 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:06.709 16:32:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:06.709 16:32:05 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.709 16:32:05 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:06.709 16:32:05 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.709 16:32:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.709 ************************************ 00:06:06.709 START TEST event_perf 00:06:06.709 ************************************ 00:06:06.709 16:32:05 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.709 Running I/O for 1 seconds...[2024-12-07 16:32:05.574949] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:06.709 [2024-12-07 16:32:05.575124] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70299 ] 00:06:06.968 [2024-12-07 16:32:05.737364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.968 [2024-12-07 16:32:05.783759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.968 [2024-12-07 16:32:05.783975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.968 Running I/O for 1 seconds...[2024-12-07 16:32:05.784048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.968 [2024-12-07 16:32:05.784177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.396 00:06:08.396 lcore 0: 99510 00:06:08.396 lcore 1: 99508 00:06:08.396 lcore 2: 99511 00:06:08.396 lcore 3: 99509 00:06:08.396 done. 
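The per-lcore counts above come from a one-second `event_perf` run across four reactors (`-m 0xF -t 1`). A minimal illustrative way to total such `lcore N: <count>` lines with awk, using the sample values copied from this run (the parsing helper is a sketch for reading the log, not part of the SPDK test suite):

```shell
# Sample per-lcore output copied verbatim from the event_perf run above.
output='lcore 0: 99510
lcore 1: 99508
lcore 2: 99511
lcore 3: 99509'

# Sum the third field of each "lcore N: <count>" line.
total=$(printf '%s\n' "$output" | awk '/^lcore/ {sum += $3} END {print sum}')
echo "$total"    # prints 398038
```

The total (~398k events/s across four cores) is the figure the `real 0m1.349s` timing below is measuring.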
00:06:08.396 00:06:08.396 real 0m1.349s 00:06:08.396 user 0m4.111s 00:06:08.396 sys 0m0.117s 00:06:08.396 ************************************ 00:06:08.396 END TEST event_perf 00:06:08.396 ************************************ 00:06:08.396 16:32:06 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.396 16:32:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.396 16:32:06 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:08.396 16:32:06 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:08.396 16:32:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.396 16:32:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.396 ************************************ 00:06:08.396 START TEST event_reactor 00:06:08.396 ************************************ 00:06:08.396 16:32:06 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:08.396 [2024-12-07 16:32:06.993895] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:08.396 [2024-12-07 16:32:06.994119] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70333 ] 00:06:08.396 [2024-12-07 16:32:07.153751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.396 [2024-12-07 16:32:07.202668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.776 test_start 00:06:09.776 oneshot 00:06:09.776 tick 100 00:06:09.776 tick 100 00:06:09.776 tick 250 00:06:09.776 tick 100 00:06:09.776 tick 100 00:06:09.776 tick 100 00:06:09.776 tick 250 00:06:09.776 tick 500 00:06:09.776 tick 100 00:06:09.776 tick 100 00:06:09.776 tick 250 00:06:09.776 tick 100 00:06:09.776 tick 100 00:06:09.776 test_end 00:06:09.776 00:06:09.776 real 0m1.344s 00:06:09.776 user 0m1.141s 00:06:09.776 sys 0m0.094s 00:06:09.776 16:32:08 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.776 16:32:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:09.776 ************************************ 00:06:09.776 END TEST event_reactor 00:06:09.776 ************************************ 00:06:09.776 16:32:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.776 16:32:08 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:09.776 16:32:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.776 16:32:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.776 ************************************ 00:06:09.776 START TEST event_reactor_perf 00:06:09.776 ************************************ 00:06:09.776 16:32:08 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.776 [2024-12-07 
16:32:08.409239] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:09.776 [2024-12-07 16:32:08.409374] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70364 ] 00:06:09.776 [2024-12-07 16:32:08.566647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.776 [2024-12-07 16:32:08.613661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.158 test_start 00:06:11.158 test_end 00:06:11.158 Performance: 411906 events per second 00:06:11.158 00:06:11.158 real 0m1.338s 00:06:11.158 user 0m1.126s 00:06:11.158 sys 0m0.104s 00:06:11.158 ************************************ 00:06:11.158 END TEST event_reactor_perf 00:06:11.158 ************************************ 00:06:11.158 16:32:09 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.158 16:32:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.158 16:32:09 event -- event/event.sh@49 -- # uname -s 00:06:11.158 16:32:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:11.158 16:32:09 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:11.158 16:32:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.158 16:32:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.158 16:32:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.158 ************************************ 00:06:11.158 START TEST event_scheduler 00:06:11.158 ************************************ 00:06:11.158 16:32:09 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:11.158 * Looking for test storage... 
00:06:11.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:11.158 16:32:09 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:11.158 16:32:09 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:11.158 16:32:09 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:11.158 16:32:09 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:11.158 16:32:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.158 16:32:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:11.158 16:32:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.158 16:32:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.158 16:32:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.158 16:32:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:11.158 16:32:10 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.158 16:32:10 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:11.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.158 --rc genhtml_branch_coverage=1 00:06:11.158 --rc genhtml_function_coverage=1 00:06:11.158 --rc genhtml_legend=1 00:06:11.158 --rc geninfo_all_blocks=1 00:06:11.158 --rc geninfo_unexecuted_blocks=1 00:06:11.158 00:06:11.158 ' 00:06:11.158 16:32:10 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:11.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.158 --rc genhtml_branch_coverage=1 00:06:11.158 --rc genhtml_function_coverage=1 00:06:11.158 --rc 
genhtml_legend=1 00:06:11.158 --rc geninfo_all_blocks=1 00:06:11.158 --rc geninfo_unexecuted_blocks=1 00:06:11.158 00:06:11.158 ' 00:06:11.158 16:32:10 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:11.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.158 --rc genhtml_branch_coverage=1 00:06:11.158 --rc genhtml_function_coverage=1 00:06:11.158 --rc genhtml_legend=1 00:06:11.158 --rc geninfo_all_blocks=1 00:06:11.158 --rc geninfo_unexecuted_blocks=1 00:06:11.158 00:06:11.158 ' 00:06:11.158 16:32:10 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:11.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.158 --rc genhtml_branch_coverage=1 00:06:11.158 --rc genhtml_function_coverage=1 00:06:11.158 --rc genhtml_legend=1 00:06:11.158 --rc geninfo_all_blocks=1 00:06:11.158 --rc geninfo_unexecuted_blocks=1 00:06:11.158 00:06:11.158 ' 00:06:11.158 16:32:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:11.158 16:32:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70440 00:06:11.158 16:32:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:11.158 16:32:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.158 16:32:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70440 00:06:11.158 16:32:10 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70440 ']' 00:06:11.158 16:32:10 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.158 16:32:10 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.158 16:32:10 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:11.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.158 16:32:10 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.158 16:32:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.418 [2024-12-07 16:32:10.089569] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:11.418 [2024-12-07 16:32:10.089697] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70440 ] 00:06:11.418 [2024-12-07 16:32:10.250853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.418 [2024-12-07 16:32:10.297868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.418 [2024-12-07 16:32:10.298101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.418 [2024-12-07 16:32:10.298230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.418 [2024-12-07 16:32:10.298142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.357 16:32:10 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.357 16:32:10 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:12.357 16:32:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:12.357 16:32:10 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.357 16:32:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.357 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:12.357 POWER: Cannot set governor of lcore 0 to userspace 00:06:12.357 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:12.357 POWER: Cannot set governor of lcore 0 to performance 00:06:12.357 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:12.357 POWER: Cannot set governor of lcore 0 to userspace 00:06:12.357 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:12.357 POWER: Cannot set governor of lcore 0 to userspace 00:06:12.357 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:12.357 POWER: Unable to set Power Management Environment for lcore 0 00:06:12.357 [2024-12-07 16:32:10.903207] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:12.357 [2024-12-07 16:32:10.903236] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:12.357 [2024-12-07 16:32:10.903251] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:12.357 [2024-12-07 16:32:10.903267] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:12.357 [2024-12-07 16:32:10.903275] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:12.357 [2024-12-07 16:32:10.903284] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:12.357 16:32:10 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.357 16:32:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:12.357 16:32:10 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.357 16:32:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.357 [2024-12-07 16:32:10.972535] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
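The scheduler app in this section is launched with `-m 0xF` (reactors on cores 0-3) and `-p 0x2` (main lcore on core 1). A small hypothetical helper, `decode_coremask` (not an SPDK function; shown only to illustrate what the hex masks in these flags mean):

```shell
# Illustrative decoder for SPDK/DPDK-style hex core masks: prints the
# list of core IDs whose bits are set in the mask.
decode_coremask() {
    local mask=$(( $1 )) core=0 cores=""
    while [ "$mask" -ne 0 ]; do
        if [ $(( mask & 1 )) -eq 1 ]; then
            cores="$cores $core"
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${cores# }"
}

decode_coremask 0xF   # prints: 0 1 2 3
decode_coremask 0x2   # prints: 1
```

The POWER errors above are expected on this host: without writable cpufreq sysfs entries the dynamic scheduler cannot install a governor, logs the failure, and continues without DPDK power management.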
00:06:12.357 16:32:10 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.357 16:32:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:12.357 16:32:10 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.357 16:32:10 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.357 16:32:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.357 ************************************ 00:06:12.357 START TEST scheduler_create_thread 00:06:12.357 ************************************ 00:06:12.357 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:12.357 16:32:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:12.357 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.357 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.357 2 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.357 3 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.357 4 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.357 5 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.357 6 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:12.357 7 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.357 8 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.357 9 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.357 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.736 10 00:06:13.736 16:32:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.736 16:32:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:13.736 16:32:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.736 16:32:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.116 16:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.116 16:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:15.116 16:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:15.116 16:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.116 16:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.684 16:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.684 16:32:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:15.684 16:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.684 16:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.249 16:32:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.249 16:32:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:16.249 16:32:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:16.249 16:32:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.249 16:32:15 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.184 16:32:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.184 ************************************ 00:06:17.184 END TEST scheduler_create_thread 00:06:17.184 ************************************ 00:06:17.184 00:06:17.184 real 0m4.879s 00:06:17.184 user 0m0.025s 00:06:17.184 sys 0m0.012s 00:06:17.184 16:32:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.184 16:32:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.184 16:32:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:17.184 16:32:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70440 00:06:17.184 16:32:15 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70440 ']' 00:06:17.184 16:32:15 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70440 00:06:17.184 16:32:15 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:17.184 16:32:15 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.184 16:32:15 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70440 00:06:17.184 killing process with pid 70440 00:06:17.184 16:32:15 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:17.184 16:32:15 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:17.184 16:32:15 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70440' 00:06:17.184 16:32:15 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70440 00:06:17.184 16:32:15 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70440 00:06:17.465 [2024-12-07 16:32:16.242390] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:17.725 ************************************ 00:06:17.725 END TEST event_scheduler 00:06:17.725 ************************************ 00:06:17.725 00:06:17.725 real 0m6.731s 00:06:17.725 user 0m15.463s 00:06:17.725 sys 0m0.494s 00:06:17.725 16:32:16 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.725 16:32:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.725 16:32:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:17.725 16:32:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:17.725 16:32:16 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.725 16:32:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.725 16:32:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.726 ************************************ 00:06:17.726 START TEST app_repeat 00:06:17.726 ************************************ 00:06:17.726 16:32:16 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:17.726 16:32:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.726 16:32:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.726 16:32:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:17.726 16:32:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.726 16:32:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:17.726 16:32:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:17.726 16:32:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:17.726 16:32:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70557 00:06:17.726 16:32:16 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:17.726 
16:32:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.726 Process app_repeat pid: 70557 00:06:17.726 spdk_app_start Round 0 00:06:17.726 16:32:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70557' 00:06:17.726 16:32:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.726 16:32:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:17.726 16:32:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70557 /var/tmp/spdk-nbd.sock 00:06:17.726 16:32:16 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70557 ']' 00:06:17.726 16:32:16 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.726 16:32:16 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.726 16:32:16 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.726 16:32:16 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.726 16:32:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.984 [2024-12-07 16:32:16.653718] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:17.984 [2024-12-07 16:32:16.653890] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70557 ] 00:06:17.984 [2024-12-07 16:32:16.814702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.984 [2024-12-07 16:32:16.861873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.984 [2024-12-07 16:32:16.861976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.920 16:32:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.920 16:32:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:18.920 16:32:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.920 Malloc0 00:06:18.920 16:32:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.179 Malloc1 00:06:19.179 16:32:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.179 16:32:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.179 16:32:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.179 16:32:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.179 16:32:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.179 16:32:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.179 16:32:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.179 16:32:17 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.179 16:32:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.179 16:32:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.179 16:32:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.179 16:32:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.179 16:32:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.179 16:32:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.179 16:32:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.179 16:32:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.438 /dev/nbd0 00:06:19.438 16:32:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.438 16:32:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.438 16:32:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:19.438 16:32:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:19.438 16:32:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.438 16:32:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.438 16:32:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:19.438 16:32:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:19.438 16:32:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.438 16:32:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.438 16:32:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.438 1+0 records in 00:06:19.438 1+0 
records out 00:06:19.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478922 s, 8.6 MB/s 00:06:19.438 16:32:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.438 16:32:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:19.438 16:32:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.438 16:32:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.438 16:32:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:19.438 16:32:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.438 16:32:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.438 16:32:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.698 /dev/nbd1 00:06:19.698 16:32:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.698 16:32:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.698 16:32:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:19.698 16:32:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:19.698 16:32:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.698 16:32:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.698 16:32:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:19.698 16:32:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:19.698 16:32:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.698 16:32:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.698 16:32:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.698 1+0 records in 00:06:19.698 1+0 records out 00:06:19.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345928 s, 11.8 MB/s 00:06:19.698 16:32:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.698 16:32:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:19.698 16:32:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.698 16:32:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.698 16:32:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:19.698 16:32:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.698 16:32:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.698 16:32:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.698 16:32:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.698 16:32:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.957 { 00:06:19.957 "nbd_device": "/dev/nbd0", 00:06:19.957 "bdev_name": "Malloc0" 00:06:19.957 }, 00:06:19.957 { 00:06:19.957 "nbd_device": "/dev/nbd1", 00:06:19.957 "bdev_name": "Malloc1" 00:06:19.957 } 00:06:19.957 ]' 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.957 { 00:06:19.957 "nbd_device": "/dev/nbd0", 00:06:19.957 "bdev_name": "Malloc0" 00:06:19.957 }, 00:06:19.957 { 00:06:19.957 "nbd_device": "/dev/nbd1", 00:06:19.957 "bdev_name": "Malloc1" 00:06:19.957 } 00:06:19.957 ]' 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.957 /dev/nbd1' 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.957 /dev/nbd1' 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.957 256+0 records in 00:06:19.957 256+0 records out 00:06:19.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138381 s, 75.8 MB/s 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.957 16:32:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.957 256+0 records in 00:06:19.957 256+0 records out 00:06:19.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260163 s, 40.3 MB/s 00:06:19.958 16:32:18 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.958 256+0 records in 00:06:19.958 256+0 records out 00:06:19.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0337266 s, 31.1 MB/s 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.958 16:32:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.217 16:32:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.217 16:32:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.217 16:32:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.217 16:32:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.217 16:32:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.217 16:32:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.217 16:32:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.217 16:32:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.217 16:32:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.217 16:32:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.477 16:32:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.477 16:32:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.477 16:32:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.477 16:32:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.477 16:32:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.477 16:32:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.477 16:32:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:20.477 16:32:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.477 16:32:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.477 16:32:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.477 16:32:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.737 16:32:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.737 16:32:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.737 16:32:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.737 16:32:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.737 16:32:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.737 16:32:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.737 16:32:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.737 16:32:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.737 16:32:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.737 16:32:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.737 16:32:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.737 16:32:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.737 16:32:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.997 16:32:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:20.997 [2024-12-07 16:32:19.861462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.256 [2024-12-07 16:32:19.902461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.256 [2024-12-07 16:32:19.902466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.256 
[2024-12-07 16:32:19.945104] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.256 [2024-12-07 16:32:19.945285] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.547 spdk_app_start Round 1 00:06:24.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.547 16:32:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.547 16:32:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:24.547 16:32:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70557 /var/tmp/spdk-nbd.sock 00:06:24.547 16:32:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70557 ']' 00:06:24.547 16:32:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.547 16:32:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.547 16:32:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:24.547 16:32:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.547 16:32:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.547 16:32:22 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.547 16:32:22 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:24.547 16:32:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.547 Malloc0 00:06:24.547 16:32:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.547 Malloc1 00:06:24.547 16:32:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.547 16:32:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.547 16:32:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.547 16:32:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.547 16:32:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.547 16:32:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.547 16:32:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.547 16:32:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.547 16:32:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.547 16:32:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.547 16:32:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.547 16:32:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.547 16:32:23 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.547 16:32:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.547 16:32:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.547 16:32:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.807 /dev/nbd0 00:06:24.807 16:32:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.807 16:32:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.807 16:32:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:24.807 16:32:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:24.807 16:32:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:24.807 16:32:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:24.807 16:32:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:24.807 16:32:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:24.807 16:32:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:24.807 16:32:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:24.807 16:32:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.807 1+0 records in 00:06:24.807 1+0 records out 00:06:24.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364739 s, 11.2 MB/s 00:06:24.807 16:32:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.807 16:32:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:24.807 16:32:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.807 
16:32:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:24.807 16:32:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:24.807 16:32:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.807 16:32:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.807 16:32:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.067 /dev/nbd1 00:06:25.067 16:32:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.067 16:32:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.067 16:32:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:25.067 16:32:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:25.067 16:32:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:25.067 16:32:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:25.067 16:32:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:25.067 16:32:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:25.067 16:32:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:25.067 16:32:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:25.067 16:32:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.067 1+0 records in 00:06:25.067 1+0 records out 00:06:25.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369363 s, 11.1 MB/s 00:06:25.067 16:32:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.067 16:32:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:25.067 16:32:23 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.067 16:32:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:25.067 16:32:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:25.067 16:32:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.067 16:32:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.067 16:32:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.067 16:32:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.067 16:32:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.326 { 00:06:25.326 "nbd_device": "/dev/nbd0", 00:06:25.326 "bdev_name": "Malloc0" 00:06:25.326 }, 00:06:25.326 { 00:06:25.326 "nbd_device": "/dev/nbd1", 00:06:25.326 "bdev_name": "Malloc1" 00:06:25.326 } 00:06:25.326 ]' 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.326 { 00:06:25.326 "nbd_device": "/dev/nbd0", 00:06:25.326 "bdev_name": "Malloc0" 00:06:25.326 }, 00:06:25.326 { 00:06:25.326 "nbd_device": "/dev/nbd1", 00:06:25.326 "bdev_name": "Malloc1" 00:06:25.326 } 00:06:25.326 ]' 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.326 /dev/nbd1' 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.326 /dev/nbd1' 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.326 
16:32:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.326 256+0 records in 00:06:25.326 256+0 records out 00:06:25.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00646371 s, 162 MB/s 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.326 256+0 records in 00:06:25.326 256+0 records out 00:06:25.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258972 s, 40.5 MB/s 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.326 16:32:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.326 256+0 records in 00:06:25.326 256+0 records out 00:06:25.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245369 s, 42.7 MB/s 00:06:25.327 16:32:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:25.327 16:32:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.327 16:32:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.327 16:32:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.327 16:32:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.327 16:32:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.327 16:32:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.327 16:32:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.327 16:32:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.586 16:32:24 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.586 16:32:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.846 16:32:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.846 16:32:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.846 16:32:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.846 16:32:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.846 16:32:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.846 16:32:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.846 16:32:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.846 16:32:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.846 16:32:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.846 16:32:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.846 16:32:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.106 16:32:24 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.106 16:32:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.106 16:32:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.106 16:32:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.106 16:32:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.106 16:32:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.106 16:32:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.106 16:32:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.106 16:32:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.106 16:32:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.106 16:32:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.106 16:32:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.106 16:32:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.371 16:32:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.631 [2024-12-07 16:32:25.289701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.631 [2024-12-07 16:32:25.334371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.631 [2024-12-07 16:32:25.334424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.631 [2024-12-07 16:32:25.377822] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.631 [2024-12-07 16:32:25.377878] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:29.958 16:32:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.958 spdk_app_start Round 2 00:06:29.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.958 16:32:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:29.958 16:32:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70557 /var/tmp/spdk-nbd.sock 00:06:29.958 16:32:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70557 ']' 00:06:29.958 16:32:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.958 16:32:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.958 16:32:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.958 16:32:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.958 16:32:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.958 16:32:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.958 16:32:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:29.958 16:32:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.958 Malloc0 00:06:29.958 16:32:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.958 Malloc1 00:06:29.958 16:32:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.958 16:32:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.958 16:32:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.959 
16:32:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.959 16:32:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.959 16:32:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.959 16:32:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.959 16:32:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.959 16:32:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.959 16:32:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.959 16:32:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.959 16:32:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.959 16:32:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:29.959 16:32:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.959 16:32:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.959 16:32:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.219 /dev/nbd0 00:06:30.219 16:32:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.219 16:32:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.219 16:32:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:30.219 16:32:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:30.219 16:32:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:30.219 16:32:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:30.219 16:32:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:30.219 16:32:28 
event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:30.219 16:32:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:30.219 16:32:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:30.219 16:32:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.219 1+0 records in 00:06:30.219 1+0 records out 00:06:30.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382136 s, 10.7 MB/s 00:06:30.219 16:32:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.219 16:32:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:30.219 16:32:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.219 16:32:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:30.219 16:32:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:30.219 16:32:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.219 16:32:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.219 16:32:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.478 /dev/nbd1 00:06:30.479 16:32:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.479 16:32:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.479 16:32:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:30.479 16:32:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:30.479 16:32:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:30.479 16:32:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:30.479 16:32:29 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:30.479 16:32:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:30.479 16:32:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:30.479 16:32:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:30.479 16:32:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.479 1+0 records in 00:06:30.479 1+0 records out 00:06:30.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042486 s, 9.6 MB/s 00:06:30.479 16:32:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.479 16:32:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:30.479 16:32:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.479 16:32:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:30.479 16:32:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:30.479 16:32:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.479 16:32:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.479 16:32:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.479 16:32:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.479 16:32:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.738 16:32:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.738 { 00:06:30.738 "nbd_device": "/dev/nbd0", 00:06:30.738 "bdev_name": "Malloc0" 00:06:30.738 }, 00:06:30.738 { 00:06:30.738 "nbd_device": "/dev/nbd1", 00:06:30.738 "bdev_name": 
"Malloc1" 00:06:30.739 } 00:06:30.739 ]' 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.739 { 00:06:30.739 "nbd_device": "/dev/nbd0", 00:06:30.739 "bdev_name": "Malloc0" 00:06:30.739 }, 00:06:30.739 { 00:06:30.739 "nbd_device": "/dev/nbd1", 00:06:30.739 "bdev_name": "Malloc1" 00:06:30.739 } 00:06:30.739 ]' 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.739 /dev/nbd1' 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.739 /dev/nbd1' 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.739 256+0 records in 00:06:30.739 256+0 records out 00:06:30.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141196 s, 74.3 MB/s 
00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.739 256+0 records in 00:06:30.739 256+0 records out 00:06:30.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260548 s, 40.2 MB/s 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.739 256+0 records in 00:06:30.739 256+0 records out 00:06:30.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025631 s, 40.9 MB/s 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.739 16:32:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.998 16:32:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.998 16:32:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.998 16:32:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.998 16:32:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.998 16:32:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.998 16:32:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.998 16:32:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.998 16:32:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.998 16:32:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.998 16:32:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.257 16:32:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.257 16:32:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:31.257 16:32:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.257 16:32:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.257 16:32:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.257 16:32:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.257 16:32:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.257 16:32:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.257 16:32:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.257 16:32:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.257 16:32:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.517 16:32:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.517 16:32:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.517 16:32:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.517 16:32:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.517 16:32:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.517 16:32:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.517 16:32:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.517 16:32:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.517 16:32:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.517 16:32:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.517 16:32:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.517 16:32:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.517 16:32:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.776 16:32:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.036 [2024-12-07 16:32:30.685238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.036 [2024-12-07 16:32:30.729513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.036 [2024-12-07 16:32:30.729519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.036 [2024-12-07 16:32:30.772594] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.036 [2024-12-07 16:32:30.772765] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:35.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.330 16:32:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70557 /var/tmp/spdk-nbd.sock 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70557 ']' 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:35.330 16:32:33 event.app_repeat -- event/event.sh@39 -- # killprocess 70557 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70557 ']' 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70557 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70557 00:06:35.330 killing process with pid 70557 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70557' 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70557 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70557 00:06:35.330 spdk_app_start is called in Round 0. 00:06:35.330 Shutdown signal received, stop current app iteration 00:06:35.330 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:35.330 spdk_app_start is called in Round 1. 00:06:35.330 Shutdown signal received, stop current app iteration 00:06:35.330 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:35.330 spdk_app_start is called in Round 2. 
00:06:35.330 Shutdown signal received, stop current app iteration 00:06:35.330 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:35.330 spdk_app_start is called in Round 3. 00:06:35.330 Shutdown signal received, stop current app iteration 00:06:35.330 16:32:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:35.330 16:32:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:35.330 00:06:35.330 real 0m17.400s 00:06:35.330 user 0m38.529s 00:06:35.330 sys 0m2.366s 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.330 16:32:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.330 ************************************ 00:06:35.330 END TEST app_repeat 00:06:35.330 ************************************ 00:06:35.331 16:32:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:35.331 16:32:34 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:35.331 16:32:34 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.331 16:32:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.331 16:32:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.331 ************************************ 00:06:35.331 START TEST cpu_locks 00:06:35.331 ************************************ 00:06:35.331 16:32:34 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:35.331 * Looking for test storage... 
00:06:35.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:35.331 16:32:34 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:35.331 16:32:34 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:35.331 16:32:34 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:35.591 16:32:34 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.591 16:32:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:35.591 16:32:34 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.591 16:32:34 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:35.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.591 --rc genhtml_branch_coverage=1 00:06:35.591 --rc genhtml_function_coverage=1 00:06:35.591 --rc genhtml_legend=1 00:06:35.591 --rc geninfo_all_blocks=1 00:06:35.591 --rc geninfo_unexecuted_blocks=1 00:06:35.591 00:06:35.591 ' 00:06:35.591 16:32:34 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:35.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.591 --rc genhtml_branch_coverage=1 00:06:35.591 --rc genhtml_function_coverage=1 00:06:35.591 --rc genhtml_legend=1 00:06:35.591 --rc geninfo_all_blocks=1 00:06:35.591 --rc geninfo_unexecuted_blocks=1 
00:06:35.591 00:06:35.591 ' 00:06:35.591 16:32:34 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:35.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.591 --rc genhtml_branch_coverage=1 00:06:35.591 --rc genhtml_function_coverage=1 00:06:35.591 --rc genhtml_legend=1 00:06:35.591 --rc geninfo_all_blocks=1 00:06:35.591 --rc geninfo_unexecuted_blocks=1 00:06:35.591 00:06:35.591 ' 00:06:35.591 16:32:34 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:35.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.591 --rc genhtml_branch_coverage=1 00:06:35.591 --rc genhtml_function_coverage=1 00:06:35.591 --rc genhtml_legend=1 00:06:35.591 --rc geninfo_all_blocks=1 00:06:35.591 --rc geninfo_unexecuted_blocks=1 00:06:35.591 00:06:35.591 ' 00:06:35.591 16:32:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:35.591 16:32:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:35.591 16:32:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:35.591 16:32:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:35.591 16:32:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.591 16:32:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.591 16:32:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.591 ************************************ 00:06:35.591 START TEST default_locks 00:06:35.591 ************************************ 00:06:35.591 16:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:35.591 16:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70990 00:06:35.591 16:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.591 
16:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70990 00:06:35.591 16:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70990 ']' 00:06:35.591 16:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.591 16:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.591 16:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.591 16:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.591 16:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.591 [2024-12-07 16:32:34.409041] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:35.591 [2024-12-07 16:32:34.409170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70990 ] 00:06:35.851 [2024-12-07 16:32:34.575412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.851 [2024-12-07 16:32:34.620727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.419 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.419 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:36.419 16:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70990 00:06:36.419 16:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70990 00:06:36.419 16:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.687 16:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70990 00:06:36.687 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70990 ']' 00:06:36.687 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70990 00:06:36.687 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:36.687 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.688 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70990 00:06:36.688 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.688 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.688 killing process with pid 70990 00:06:36.688 16:32:35 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70990' 00:06:36.688 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70990 00:06:36.688 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70990 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70990 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70990 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70990 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70990 ']' 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.946 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70990) - No such process 00:06:36.946 ERROR: process (pid: 70990) is no longer running 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.946 00:06:36.946 real 0m1.537s 00:06:36.946 user 0m1.475s 00:06:36.946 sys 0m0.522s 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.946 16:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.946 ************************************ 00:06:36.946 END TEST default_locks 00:06:36.946 ************************************ 00:06:37.205 16:32:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:37.205 16:32:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:37.205 16:32:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.205 16:32:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.205 ************************************ 00:06:37.205 START TEST default_locks_via_rpc 00:06:37.205 ************************************ 00:06:37.205 16:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:37.205 16:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71037 00:06:37.205 16:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.205 16:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71037 00:06:37.205 16:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71037 ']' 00:06:37.205 16:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.205 16:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.205 16:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.205 16:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.205 16:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.205 [2024-12-07 16:32:36.007235] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:37.206 [2024-12-07 16:32:36.007363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71037 ] 00:06:37.464 [2024-12-07 16:32:36.152317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.464 [2024-12-07 16:32:36.198770] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.031 16:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.032 16:32:36 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71037 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71037 00:06:38.032 16:32:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.600 16:32:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71037 00:06:38.600 16:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71037 ']' 00:06:38.600 16:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71037 00:06:38.600 16:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:38.600 16:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.600 16:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71037 00:06:38.600 16:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.600 16:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.600 killing process with pid 71037 00:06:38.600 16:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71037' 00:06:38.600 16:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71037 00:06:38.600 16:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71037 00:06:38.858 00:06:38.858 real 0m1.752s 00:06:38.858 user 0m1.725s 00:06:38.858 sys 0m0.606s 00:06:38.858 16:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.858 16:32:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.858 ************************************ 00:06:38.858 END TEST default_locks_via_rpc 00:06:38.858 ************************************ 00:06:38.858 16:32:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:38.858 16:32:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.858 16:32:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.858 16:32:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.858 ************************************ 00:06:38.858 START TEST non_locking_app_on_locked_coremask 00:06:38.858 ************************************ 00:06:38.858 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:38.858 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71085 00:06:38.858 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.858 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71085 /var/tmp/spdk.sock 00:06:38.858 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71085 ']' 00:06:38.858 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.858 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:38.858 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.858 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.858 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.118 [2024-12-07 16:32:37.822129] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:39.118 [2024-12-07 16:32:37.822269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71085 ] 00:06:39.118 [2024-12-07 16:32:37.982703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.376 [2024-12-07 16:32:38.030077] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.943 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.943 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:39.943 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71101 00:06:39.943 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:39.943 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71101 /var/tmp/spdk2.sock 00:06:39.943 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71101 ']' 00:06:39.943 16:32:38 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.943 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.943 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.943 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.943 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.943 [2024-12-07 16:32:38.713158] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:39.943 [2024-12-07 16:32:38.713288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71101 ] 00:06:40.201 [2024-12-07 16:32:38.861115] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.201 [2024-12-07 16:32:38.861165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.201 [2024-12-07 16:32:38.947210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.767 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.767 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:40.767 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71085 00:06:40.767 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71085 00:06:40.767 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.336 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71085 00:06:41.336 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71085 ']' 00:06:41.336 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71085 00:06:41.336 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:41.336 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.336 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71085 00:06:41.336 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.336 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.336 killing process with pid 71085 00:06:41.336 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 71085' 00:06:41.336 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71085 00:06:41.336 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71085 00:06:41.908 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71101 00:06:41.908 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71101 ']' 00:06:41.908 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71101 00:06:41.908 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:41.908 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.908 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71101 00:06:42.185 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.185 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.185 killing process with pid 71101 00:06:42.185 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71101' 00:06:42.185 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71101 00:06:42.185 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71101 00:06:42.459 00:06:42.459 real 0m3.467s 00:06:42.459 user 0m3.588s 00:06:42.459 sys 0m1.095s 00:06:42.459 16:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:42.459 16:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.459 ************************************ 00:06:42.459 END TEST non_locking_app_on_locked_coremask 00:06:42.459 ************************************ 00:06:42.459 16:32:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:42.459 16:32:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.459 16:32:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.459 16:32:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.459 ************************************ 00:06:42.459 START TEST locking_app_on_unlocked_coremask 00:06:42.459 ************************************ 00:06:42.459 16:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:42.459 16:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71164 00:06:42.459 16:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:42.459 16:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71164 /var/tmp/spdk.sock 00:06:42.459 16:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71164 ']' 00:06:42.459 16:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.459 16:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.459 16:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.459 16:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.459 16:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.459 [2024-12-07 16:32:41.354744] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:42.459 [2024-12-07 16:32:41.354863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71164 ] 00:06:42.718 [2024-12-07 16:32:41.514689] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:42.718 [2024-12-07 16:32:41.514788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.718 [2024-12-07 16:32:41.557874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.287 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.287 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:43.287 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.287 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71175 00:06:43.287 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71175 /var/tmp/spdk2.sock 00:06:43.287 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71175 
']' 00:06:43.287 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.287 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.287 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.287 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.287 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.547 [2024-12-07 16:32:42.231738] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:43.547 [2024-12-07 16:32:42.231861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71175 ] 00:06:43.547 [2024-12-07 16:32:42.381349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.806 [2024-12-07 16:32:42.470285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.376 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.376 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:44.376 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71175 00:06:44.376 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71175 00:06:44.376 16:32:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.315 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71164 00:06:45.315 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71164 ']' 00:06:45.315 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71164 00:06:45.315 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:45.315 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.315 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71164 00:06:45.315 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.315 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.315 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71164' 00:06:45.315 killing process with pid 71164 00:06:45.315 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71164 00:06:45.315 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71164 00:06:45.885 16:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71175 00:06:45.885 16:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71175 ']' 00:06:45.885 16:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71175 00:06:45.885 16:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:06:45.885 16:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.885 16:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71175 00:06:46.145 16:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.145 16:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.145 killing process with pid 71175 00:06:46.145 16:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71175' 00:06:46.145 16:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71175 00:06:46.145 16:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71175 00:06:46.412 00:06:46.412 real 0m3.908s 00:06:46.412 user 0m4.094s 00:06:46.412 sys 0m1.231s 00:06:46.412 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.412 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.412 ************************************ 00:06:46.412 END TEST locking_app_on_unlocked_coremask 00:06:46.412 ************************************ 00:06:46.412 16:32:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:46.412 16:32:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.412 16:32:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.412 16:32:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.412 ************************************ 00:06:46.412 START TEST 
locking_app_on_locked_coremask 00:06:46.412 ************************************ 00:06:46.412 16:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:46.412 16:32:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71244 00:06:46.412 16:32:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.412 16:32:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71244 /var/tmp/spdk.sock 00:06:46.412 16:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71244 ']' 00:06:46.412 16:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.412 16:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.412 16:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.412 16:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.412 16:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.672 [2024-12-07 16:32:45.331380] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:46.672 [2024-12-07 16:32:45.331510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71244 ] 00:06:46.672 [2024-12-07 16:32:45.490956] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.672 [2024-12-07 16:32:45.538683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71262 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71262 /var/tmp/spdk2.sock 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71262 /var/tmp/spdk2.sock 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71262 /var/tmp/spdk2.sock 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71262 ']' 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.609 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.609 [2024-12-07 16:32:46.242986] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:47.609 [2024-12-07 16:32:46.243117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71262 ] 00:06:47.609 [2024-12-07 16:32:46.393279] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71244 has claimed it. 00:06:47.609 [2024-12-07 16:32:46.393354] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:48.179 ERROR: process (pid: 71262) is no longer running 00:06:48.179 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71262) - No such process 00:06:48.179 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.179 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:48.179 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:48.179 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.179 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.179 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.179 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71244 00:06:48.179 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71244 00:06:48.179 16:32:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.748 16:32:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71244 00:06:48.748 16:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71244 ']' 00:06:48.748 16:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71244 00:06:48.748 16:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:48.748 16:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.748 16:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71244 00:06:48.748 
16:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.748 16:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.748 killing process with pid 71244 00:06:48.748 16:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71244' 00:06:48.748 16:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71244 00:06:48.748 16:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71244 00:06:49.008 00:06:49.008 real 0m2.545s 00:06:49.008 user 0m2.723s 00:06:49.008 sys 0m0.810s 00:06:49.008 16:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.008 16:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.008 ************************************ 00:06:49.008 END TEST locking_app_on_locked_coremask 00:06:49.008 ************************************ 00:06:49.008 16:32:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:49.008 16:32:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.008 16:32:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.008 16:32:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.008 ************************************ 00:06:49.008 START TEST locking_overlapped_coremask 00:06:49.008 ************************************ 00:06:49.008 16:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:49.008 16:32:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71315 00:06:49.008 16:32:47 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:49.008 16:32:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71315 /var/tmp/spdk.sock 00:06:49.008 16:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71315 ']' 00:06:49.008 16:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.008 16:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.008 16:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.008 16:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.008 16:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.267 [2024-12-07 16:32:47.943541] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:49.267 [2024-12-07 16:32:47.943679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71315 ] 00:06:49.267 [2024-12-07 16:32:48.103641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.267 [2024-12-07 16:32:48.148591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.267 [2024-12-07 16:32:48.148693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.267 [2024-12-07 16:32:48.148825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.204 16:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.204 16:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:50.204 16:32:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71333 00:06:50.204 16:32:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:50.204 16:32:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71333 /var/tmp/spdk2.sock 00:06:50.204 16:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:50.204 16:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71333 /var/tmp/spdk2.sock 00:06:50.204 16:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:50.204 16:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.205 16:32:48 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:50.205 16:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.205 16:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71333 /var/tmp/spdk2.sock 00:06:50.205 16:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71333 ']' 00:06:50.205 16:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.205 16:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.205 16:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.205 16:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.205 16:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.205 [2024-12-07 16:32:48.850420] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:50.205 [2024-12-07 16:32:48.850881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71333 ] 00:06:50.205 [2024-12-07 16:32:48.999929] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71315 has claimed it. 00:06:50.205 [2024-12-07 16:32:49.000000] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:50.773 ERROR: process (pid: 71333) is no longer running 00:06:50.773 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71333) - No such process 00:06:50.773 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.773 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:50.773 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:50.773 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.773 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.773 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.773 16:32:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:50.774 16:32:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.774 16:32:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.774 16:32:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.774 16:32:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71315 00:06:50.774 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71315 ']' 00:06:50.774 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71315 00:06:50.774 16:32:49 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:50.774 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.774 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71315 00:06:50.774 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.774 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.774 killing process with pid 71315 00:06:50.774 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71315' 00:06:50.774 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71315 00:06:50.774 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71315 00:06:51.033 00:06:51.033 real 0m2.055s 00:06:51.033 user 0m5.443s 00:06:51.033 sys 0m0.519s 00:06:51.033 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.033 16:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.033 ************************************ 00:06:51.033 END TEST locking_overlapped_coremask 00:06:51.033 ************************************ 00:06:51.292 16:32:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:51.292 16:32:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.292 16:32:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.292 16:32:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.292 ************************************ 00:06:51.292 START TEST 
locking_overlapped_coremask_via_rpc 00:06:51.292 ************************************ 00:06:51.292 16:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:51.292 16:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71375 00:06:51.292 16:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:51.292 16:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71375 /var/tmp/spdk.sock 00:06:51.292 16:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71375 ']' 00:06:51.292 16:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.292 16:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.292 16:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.292 16:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.292 16:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.292 [2024-12-07 16:32:50.068937] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:51.292 [2024-12-07 16:32:50.069421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71375 ] 00:06:51.551 [2024-12-07 16:32:50.215769] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:51.551 [2024-12-07 16:32:50.215838] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.551 [2024-12-07 16:32:50.262509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.551 [2024-12-07 16:32:50.262614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.551 [2024-12-07 16:32:50.262772] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.122 16:32:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.122 16:32:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:52.122 16:32:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71393 00:06:52.122 16:32:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71393 /var/tmp/spdk2.sock 00:06:52.122 16:32:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:52.122 16:32:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71393 ']' 00:06:52.122 16:32:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.122 16:32:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.122 16:32:50 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.122 16:32:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.122 16:32:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.122 [2024-12-07 16:32:50.979203] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:52.122 [2024-12-07 16:32:50.979323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71393 ] 00:06:52.380 [2024-12-07 16:32:51.131931] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:52.380 [2024-12-07 16:32:51.131986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.380 [2024-12-07 16:32:51.235325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.380 [2024-12-07 16:32:51.235479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:52.380 [2024-12-07 16:32:51.235355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.948 16:32:51 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.948 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.207 [2024-12-07 16:32:51.845577] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71375 has claimed it. 00:06:53.207 request: 00:06:53.207 { 00:06:53.207 "method": "framework_enable_cpumask_locks", 00:06:53.207 "req_id": 1 00:06:53.207 } 00:06:53.207 Got JSON-RPC error response 00:06:53.207 response: 00:06:53.207 { 00:06:53.207 "code": -32603, 00:06:53.207 "message": "Failed to claim CPU core: 2" 00:06:53.207 } 00:06:53.207 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:53.207 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:53.207 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.207 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:53.207 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.207 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71375 /var/tmp/spdk.sock 00:06:53.207 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 71375 ']' 00:06:53.207 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.207 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.207 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.207 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.207 16:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.207 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.207 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:53.207 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71393 /var/tmp/spdk2.sock 00:06:53.207 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71393 ']' 00:06:53.207 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.207 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.207 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:53.207 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.207 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.467 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.467 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:53.467 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.467 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.467 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.467 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.467 00:06:53.467 real 0m2.299s 00:06:53.467 user 0m1.048s 00:06:53.467 sys 0m0.179s 00:06:53.467 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.467 16:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.467 ************************************ 00:06:53.467 END TEST locking_overlapped_coremask_via_rpc 00:06:53.467 ************************************ 00:06:53.467 16:32:52 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.467 16:32:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71375 ]] 00:06:53.467 16:32:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71375 00:06:53.467 16:32:52 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71375 ']' 00:06:53.467 16:32:52 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71375 00:06:53.467 16:32:52 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:53.467 16:32:52 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.467 16:32:52 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71375 00:06:53.735 16:32:52 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.735 16:32:52 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.735 16:32:52 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71375' 00:06:53.735 killing process with pid 71375 00:06:53.735 16:32:52 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71375 00:06:53.735 16:32:52 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71375 00:06:53.995 16:32:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71393 ]] 00:06:53.995 16:32:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71393 00:06:53.995 16:32:52 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71393 ']' 00:06:53.995 16:32:52 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71393 00:06:53.995 16:32:52 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:53.995 16:32:52 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.995 16:32:52 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71393 00:06:53.995 16:32:52 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:53.995 16:32:52 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:53.995 killing process with pid 71393 00:06:53.995 16:32:52 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71393' 00:06:53.995 16:32:52 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71393 00:06:53.995 16:32:52 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71393 00:06:54.566 16:32:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.566 16:32:53 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:54.566 16:32:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71375 ]] 00:06:54.566 16:32:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71375 00:06:54.566 16:32:53 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71375 ']' 00:06:54.566 16:32:53 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71375 00:06:54.566 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71375) - No such process 00:06:54.566 Process with pid 71375 is not found 00:06:54.566 16:32:53 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71375 is not found' 00:06:54.566 16:32:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71393 ]] 00:06:54.566 16:32:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71393 00:06:54.566 16:32:53 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71393 ']' 00:06:54.566 16:32:53 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71393 00:06:54.566 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71393) - No such process 00:06:54.566 Process with pid 71393 is not found 00:06:54.566 16:32:53 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71393 is not found' 00:06:54.566 16:32:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.566 00:06:54.566 real 0m19.178s 00:06:54.566 user 0m31.596s 00:06:54.566 sys 0m6.096s 00:06:54.566 16:32:53 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.566 16:32:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.566 
************************************ 00:06:54.566 END TEST cpu_locks 00:06:54.566 ************************************ 00:06:54.566 00:06:54.566 real 0m48.003s 00:06:54.566 user 1m32.238s 00:06:54.566 sys 0m9.666s 00:06:54.566 16:32:53 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.566 16:32:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.566 ************************************ 00:06:54.566 END TEST event 00:06:54.566 ************************************ 00:06:54.566 16:32:53 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:54.566 16:32:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.566 16:32:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.566 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:06:54.566 ************************************ 00:06:54.566 START TEST thread 00:06:54.566 ************************************ 00:06:54.566 16:32:53 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:54.827 * Looking for test storage... 
00:06:54.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:54.827 16:32:53 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:54.827 16:32:53 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:54.827 16:32:53 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:54.827 16:32:53 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:54.827 16:32:53 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.827 16:32:53 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.827 16:32:53 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.827 16:32:53 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.827 16:32:53 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.827 16:32:53 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.827 16:32:53 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.827 16:32:53 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.827 16:32:53 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.827 16:32:53 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.827 16:32:53 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.827 16:32:53 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:54.827 16:32:53 thread -- scripts/common.sh@345 -- # : 1 00:06:54.827 16:32:53 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.827 16:32:53 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.827 16:32:53 thread -- scripts/common.sh@365 -- # decimal 1 00:06:54.827 16:32:53 thread -- scripts/common.sh@353 -- # local d=1 00:06:54.827 16:32:53 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.827 16:32:53 thread -- scripts/common.sh@355 -- # echo 1 00:06:54.827 16:32:53 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.827 16:32:53 thread -- scripts/common.sh@366 -- # decimal 2 00:06:54.827 16:32:53 thread -- scripts/common.sh@353 -- # local d=2 00:06:54.827 16:32:53 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.827 16:32:53 thread -- scripts/common.sh@355 -- # echo 2 00:06:54.827 16:32:53 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.827 16:32:53 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.827 16:32:53 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.827 16:32:53 thread -- scripts/common.sh@368 -- # return 0 00:06:54.827 16:32:53 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.827 16:32:53 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:54.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.827 --rc genhtml_branch_coverage=1 00:06:54.827 --rc genhtml_function_coverage=1 00:06:54.827 --rc genhtml_legend=1 00:06:54.827 --rc geninfo_all_blocks=1 00:06:54.827 --rc geninfo_unexecuted_blocks=1 00:06:54.827 00:06:54.827 ' 00:06:54.827 16:32:53 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:54.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.827 --rc genhtml_branch_coverage=1 00:06:54.827 --rc genhtml_function_coverage=1 00:06:54.827 --rc genhtml_legend=1 00:06:54.827 --rc geninfo_all_blocks=1 00:06:54.827 --rc geninfo_unexecuted_blocks=1 00:06:54.827 00:06:54.827 ' 00:06:54.827 16:32:53 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:54.827 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.827 --rc genhtml_branch_coverage=1 00:06:54.827 --rc genhtml_function_coverage=1 00:06:54.827 --rc genhtml_legend=1 00:06:54.827 --rc geninfo_all_blocks=1 00:06:54.827 --rc geninfo_unexecuted_blocks=1 00:06:54.828 00:06:54.828 ' 00:06:54.828 16:32:53 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:54.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.828 --rc genhtml_branch_coverage=1 00:06:54.828 --rc genhtml_function_coverage=1 00:06:54.828 --rc genhtml_legend=1 00:06:54.828 --rc geninfo_all_blocks=1 00:06:54.828 --rc geninfo_unexecuted_blocks=1 00:06:54.828 00:06:54.828 ' 00:06:54.828 16:32:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.828 16:32:53 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:54.828 16:32:53 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.828 16:32:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.828 ************************************ 00:06:54.828 START TEST thread_poller_perf 00:06:54.828 ************************************ 00:06:54.828 16:32:53 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.828 [2024-12-07 16:32:53.635844] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:54.828 [2024-12-07 16:32:53.635981] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71520 ] 00:06:55.088 [2024-12-07 16:32:53.797117] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.088 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:55.088 [2024-12-07 16:32:53.844613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.056 [2024-12-07T16:32:54.955Z] ====================================== 00:06:56.056 [2024-12-07T16:32:54.955Z] busy:2300379280 (cyc) 00:06:56.056 [2024-12-07T16:32:54.955Z] total_run_count: 402000 00:06:56.056 [2024-12-07T16:32:54.955Z] tsc_hz: 2290000000 (cyc) 00:06:56.056 [2024-12-07T16:32:54.955Z] ====================================== 00:06:56.056 [2024-12-07T16:32:54.955Z] poller_cost: 5722 (cyc), 2498 (nsec) 00:06:56.056 00:06:56.056 real 0m1.350s 00:06:56.056 user 0m1.153s 00:06:56.056 sys 0m0.091s 00:06:56.056 16:32:54 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.056 16:32:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.056 ************************************ 00:06:56.056 END TEST thread_poller_perf 00:06:56.056 ************************************ 00:06:56.316 16:32:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.316 16:32:54 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:56.316 16:32:54 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.316 16:32:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.316 ************************************ 00:06:56.316 START TEST thread_poller_perf 00:06:56.316 
************************************ 00:06:56.316 16:32:55 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.316 [2024-12-07 16:32:55.049887] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:56.316 [2024-12-07 16:32:55.050002] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71562 ] 00:06:56.316 [2024-12-07 16:32:55.209207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.575 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:56.575 [2024-12-07 16:32:55.253513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.512 [2024-12-07T16:32:56.411Z] ====================================== 00:06:57.512 [2024-12-07T16:32:56.411Z] busy:2293406290 (cyc) 00:06:57.512 [2024-12-07T16:32:56.411Z] total_run_count: 5435000 00:06:57.512 [2024-12-07T16:32:56.411Z] tsc_hz: 2290000000 (cyc) 00:06:57.512 [2024-12-07T16:32:56.411Z] ====================================== 00:06:57.512 [2024-12-07T16:32:56.411Z] poller_cost: 421 (cyc), 183 (nsec) 00:06:57.512 00:06:57.512 real 0m1.341s 00:06:57.512 user 0m1.138s 00:06:57.512 sys 0m0.098s 00:06:57.512 16:32:56 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.512 16:32:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.512 ************************************ 00:06:57.512 END TEST thread_poller_perf 00:06:57.512 ************************************ 00:06:57.771 16:32:56 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:57.771 ************************************ 00:06:57.771 END TEST thread 00:06:57.771 ************************************ 00:06:57.771 
00:06:57.771 real 0m3.054s 00:06:57.771 user 0m2.461s 00:06:57.771 sys 0m0.401s 00:06:57.771 16:32:56 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.771 16:32:56 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.771 16:32:56 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:57.771 16:32:56 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.771 16:32:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.771 16:32:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.771 16:32:56 -- common/autotest_common.sh@10 -- # set +x 00:06:57.771 ************************************ 00:06:57.771 START TEST app_cmdline 00:06:57.771 ************************************ 00:06:57.771 16:32:56 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.771 * Looking for test storage... 00:06:57.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:57.771 16:32:56 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:57.771 16:32:56 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:57.771 16:32:56 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:58.031 16:32:56 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.031 16:32:56 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:58.031 16:32:56 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.031 16:32:56 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:58.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.031 --rc genhtml_branch_coverage=1 00:06:58.031 --rc genhtml_function_coverage=1 00:06:58.031 --rc 
genhtml_legend=1 00:06:58.031 --rc geninfo_all_blocks=1 00:06:58.031 --rc geninfo_unexecuted_blocks=1 00:06:58.031 00:06:58.031 ' 00:06:58.031 16:32:56 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:58.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.032 --rc genhtml_branch_coverage=1 00:06:58.032 --rc genhtml_function_coverage=1 00:06:58.032 --rc genhtml_legend=1 00:06:58.032 --rc geninfo_all_blocks=1 00:06:58.032 --rc geninfo_unexecuted_blocks=1 00:06:58.032 00:06:58.032 ' 00:06:58.032 16:32:56 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:58.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.032 --rc genhtml_branch_coverage=1 00:06:58.032 --rc genhtml_function_coverage=1 00:06:58.032 --rc genhtml_legend=1 00:06:58.032 --rc geninfo_all_blocks=1 00:06:58.032 --rc geninfo_unexecuted_blocks=1 00:06:58.032 00:06:58.032 ' 00:06:58.032 16:32:56 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:58.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.032 --rc genhtml_branch_coverage=1 00:06:58.032 --rc genhtml_function_coverage=1 00:06:58.032 --rc genhtml_legend=1 00:06:58.032 --rc geninfo_all_blocks=1 00:06:58.032 --rc geninfo_unexecuted_blocks=1 00:06:58.032 00:06:58.032 ' 00:06:58.032 16:32:56 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:58.032 16:32:56 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71640 00:06:58.032 16:32:56 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:58.032 16:32:56 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71640 00:06:58.032 16:32:56 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71640 ']' 00:06:58.032 16:32:56 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.032 16:32:56 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:06:58.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.032 16:32:56 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.032 16:32:56 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.032 16:32:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.032 [2024-12-07 16:32:56.800252] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:58.032 [2024-12-07 16:32:56.800757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71640 ] 00:06:58.291 [2024-12-07 16:32:56.961381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.291 [2024-12-07 16:32:57.006036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.859 16:32:57 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.859 16:32:57 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:58.859 16:32:57 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:59.118 { 00:06:59.118 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:59.118 "fields": { 00:06:59.118 "major": 24, 00:06:59.118 "minor": 9, 00:06:59.118 "patch": 1, 00:06:59.118 "suffix": "-pre", 00:06:59.118 "commit": "b18e1bd62" 00:06:59.118 } 00:06:59.118 } 00:06:59.118 16:32:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:59.118 16:32:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:59.118 16:32:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:59.118 16:32:57 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:59.118 16:32:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:59.118 16:32:57 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.118 16:32:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.118 16:32:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:59.118 16:32:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:59.118 16:32:57 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.118 16:32:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:59.118 16:32:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:59.118 16:32:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.118 16:32:57 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:59.118 16:32:57 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.118 16:32:57 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.118 16:32:57 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.118 16:32:57 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.118 16:32:57 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.118 16:32:57 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.118 16:32:57 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.118 16:32:57 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.118 16:32:57 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:59.118 16:32:57 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.377 request: 00:06:59.377 { 00:06:59.377 "method": "env_dpdk_get_mem_stats", 00:06:59.377 "req_id": 1 00:06:59.377 } 00:06:59.377 Got JSON-RPC error response 00:06:59.377 response: 00:06:59.377 { 00:06:59.377 "code": -32601, 00:06:59.377 "message": "Method not found" 00:06:59.377 } 00:06:59.377 16:32:58 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:59.377 16:32:58 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.377 16:32:58 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:59.377 16:32:58 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.377 16:32:58 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71640 00:06:59.377 16:32:58 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71640 ']' 00:06:59.377 16:32:58 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71640 00:06:59.377 16:32:58 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:59.377 16:32:58 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.377 16:32:58 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71640 00:06:59.377 16:32:58 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.377 16:32:58 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.377 killing process with pid 71640 00:06:59.377 16:32:58 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71640' 00:06:59.377 16:32:58 app_cmdline -- common/autotest_common.sh@969 -- # kill 71640 00:06:59.377 16:32:58 app_cmdline -- common/autotest_common.sh@974 -- # wait 71640 00:06:59.636 ************************************ 00:06:59.636 END TEST app_cmdline 00:06:59.636 ************************************ 
00:06:59.636 00:06:59.636 real 0m2.016s 00:06:59.636 user 0m2.227s 00:06:59.636 sys 0m0.579s 00:06:59.636 16:32:58 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.636 16:32:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.893 16:32:58 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:59.893 16:32:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.893 16:32:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.893 16:32:58 -- common/autotest_common.sh@10 -- # set +x 00:06:59.893 ************************************ 00:06:59.893 START TEST version 00:06:59.893 ************************************ 00:06:59.893 16:32:58 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:59.893 * Looking for test storage... 00:06:59.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:59.893 16:32:58 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:59.893 16:32:58 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:59.893 16:32:58 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:59.893 16:32:58 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:59.893 16:32:58 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.893 16:32:58 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.893 16:32:58 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.893 16:32:58 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.893 16:32:58 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.893 16:32:58 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.893 16:32:58 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.893 16:32:58 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.893 16:32:58 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.893 16:32:58 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:59.893 16:32:58 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.893 16:32:58 version -- scripts/common.sh@344 -- # case "$op" in 00:06:59.893 16:32:58 version -- scripts/common.sh@345 -- # : 1 00:06:59.894 16:32:58 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.894 16:32:58 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.894 16:32:58 version -- scripts/common.sh@365 -- # decimal 1 00:06:59.894 16:32:58 version -- scripts/common.sh@353 -- # local d=1 00:06:59.894 16:32:58 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.894 16:32:58 version -- scripts/common.sh@355 -- # echo 1 00:06:59.894 16:32:58 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.894 16:32:58 version -- scripts/common.sh@366 -- # decimal 2 00:06:59.894 16:32:58 version -- scripts/common.sh@353 -- # local d=2 00:06:59.894 16:32:58 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.894 16:32:58 version -- scripts/common.sh@355 -- # echo 2 00:06:59.894 16:32:58 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.894 16:32:58 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.894 16:32:58 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.894 16:32:58 version -- scripts/common.sh@368 -- # return 0 00:06:59.894 16:32:58 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.894 16:32:58 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:59.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.894 --rc genhtml_branch_coverage=1 00:06:59.894 --rc genhtml_function_coverage=1 00:06:59.894 --rc genhtml_legend=1 00:06:59.894 --rc geninfo_all_blocks=1 00:06:59.894 --rc geninfo_unexecuted_blocks=1 00:06:59.894 00:06:59.894 ' 00:06:59.894 16:32:58 version -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:06:59.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.894 --rc genhtml_branch_coverage=1 00:06:59.894 --rc genhtml_function_coverage=1 00:06:59.894 --rc genhtml_legend=1 00:06:59.894 --rc geninfo_all_blocks=1 00:06:59.894 --rc geninfo_unexecuted_blocks=1 00:06:59.894 00:06:59.894 ' 00:06:59.894 16:32:58 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:59.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.894 --rc genhtml_branch_coverage=1 00:06:59.894 --rc genhtml_function_coverage=1 00:06:59.894 --rc genhtml_legend=1 00:06:59.894 --rc geninfo_all_blocks=1 00:06:59.894 --rc geninfo_unexecuted_blocks=1 00:06:59.894 00:06:59.894 ' 00:06:59.894 16:32:58 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:59.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.894 --rc genhtml_branch_coverage=1 00:06:59.894 --rc genhtml_function_coverage=1 00:06:59.894 --rc genhtml_legend=1 00:06:59.894 --rc geninfo_all_blocks=1 00:06:59.894 --rc geninfo_unexecuted_blocks=1 00:06:59.894 00:06:59.894 ' 00:06:59.894 16:32:58 version -- app/version.sh@17 -- # get_header_version major 00:07:00.152 16:32:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.152 16:32:58 version -- app/version.sh@14 -- # cut -f2 00:07:00.152 16:32:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.152 16:32:58 version -- app/version.sh@17 -- # major=24 00:07:00.152 16:32:58 version -- app/version.sh@18 -- # get_header_version minor 00:07:00.152 16:32:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.152 16:32:58 version -- app/version.sh@14 -- # cut -f2 00:07:00.152 16:32:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.152 16:32:58 version -- app/version.sh@18 -- # minor=9 00:07:00.152 16:32:58 
version -- app/version.sh@19 -- # get_header_version patch 00:07:00.152 16:32:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.152 16:32:58 version -- app/version.sh@14 -- # cut -f2 00:07:00.152 16:32:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.152 16:32:58 version -- app/version.sh@19 -- # patch=1 00:07:00.152 16:32:58 version -- app/version.sh@20 -- # get_header_version suffix 00:07:00.152 16:32:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.152 16:32:58 version -- app/version.sh@14 -- # cut -f2 00:07:00.152 16:32:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.152 16:32:58 version -- app/version.sh@20 -- # suffix=-pre 00:07:00.152 16:32:58 version -- app/version.sh@22 -- # version=24.9 00:07:00.152 16:32:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:00.152 16:32:58 version -- app/version.sh@25 -- # version=24.9.1 00:07:00.152 16:32:58 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:00.152 16:32:58 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:00.153 16:32:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:00.153 16:32:58 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:00.153 16:32:58 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:00.153 ************************************ 00:07:00.153 END TEST version 00:07:00.153 ************************************ 00:07:00.153 00:07:00.153 real 0m0.317s 00:07:00.153 user 0m0.199s 00:07:00.153 sys 0m0.176s 00:07:00.153 16:32:58 version -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:07:00.153 16:32:58 version -- common/autotest_common.sh@10 -- # set +x 00:07:00.153 16:32:58 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:00.153 16:32:58 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:00.153 16:32:58 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:00.153 16:32:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.153 16:32:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.153 16:32:58 -- common/autotest_common.sh@10 -- # set +x 00:07:00.153 ************************************ 00:07:00.153 START TEST bdev_raid 00:07:00.153 ************************************ 00:07:00.153 16:32:58 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:00.411 * Looking for test storage... 00:07:00.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:00.411 16:32:59 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:00.411 16:32:59 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:07:00.411 16:32:59 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:00.411 16:32:59 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.411 16:32:59 bdev_raid -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.411 16:32:59 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.412 16:32:59 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:00.412 16:32:59 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.412 16:32:59 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:00.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.412 --rc genhtml_branch_coverage=1 00:07:00.412 --rc genhtml_function_coverage=1 00:07:00.412 --rc genhtml_legend=1 00:07:00.412 --rc geninfo_all_blocks=1 00:07:00.412 --rc geninfo_unexecuted_blocks=1 00:07:00.412 00:07:00.412 ' 00:07:00.412 16:32:59 bdev_raid -- 
common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:00.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.412 --rc genhtml_branch_coverage=1 00:07:00.412 --rc genhtml_function_coverage=1 00:07:00.412 --rc genhtml_legend=1 00:07:00.412 --rc geninfo_all_blocks=1 00:07:00.412 --rc geninfo_unexecuted_blocks=1 00:07:00.412 00:07:00.412 ' 00:07:00.412 16:32:59 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:00.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.412 --rc genhtml_branch_coverage=1 00:07:00.412 --rc genhtml_function_coverage=1 00:07:00.412 --rc genhtml_legend=1 00:07:00.412 --rc geninfo_all_blocks=1 00:07:00.412 --rc geninfo_unexecuted_blocks=1 00:07:00.412 00:07:00.412 ' 00:07:00.412 16:32:59 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:00.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.412 --rc genhtml_branch_coverage=1 00:07:00.412 --rc genhtml_function_coverage=1 00:07:00.412 --rc genhtml_legend=1 00:07:00.412 --rc geninfo_all_blocks=1 00:07:00.412 --rc geninfo_unexecuted_blocks=1 00:07:00.412 00:07:00.412 ' 00:07:00.412 16:32:59 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:00.412 16:32:59 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:00.412 16:32:59 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:00.412 16:32:59 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:00.412 16:32:59 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:00.412 16:32:59 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:00.412 16:32:59 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:00.412 16:32:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.412 16:32:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.412 16:32:59 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:07:00.412 ************************************ 00:07:00.412 START TEST raid1_resize_data_offset_test 00:07:00.412 ************************************ 00:07:00.412 16:32:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:07:00.412 16:32:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71811 00:07:00.412 Process raid pid: 71811 00:07:00.412 16:32:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71811' 00:07:00.412 16:32:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:00.412 16:32:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71811 00:07:00.412 16:32:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71811 ']' 00:07:00.412 16:32:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.412 16:32:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.412 16:32:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.412 16:32:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.412 16:32:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.412 [2024-12-07 16:32:59.265491] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:00.412 [2024-12-07 16:32:59.265638] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.670 [2024-12-07 16:32:59.427674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.670 [2024-12-07 16:32:59.474347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.670 [2024-12-07 16:32:59.516704] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.670 [2024-12-07 16:32:59.516757] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.240 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.240 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:07:01.240 16:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:01.240 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.240 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.240 malloc0 00:07:01.240 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.240 16:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:01.240 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.240 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.500 malloc1 00:07:01.500 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.500 16:33:00 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:01.500 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.500 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.500 null0 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.501 [2024-12-07 16:33:00.160513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:01.501 [2024-12-07 16:33:00.162281] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:01.501 [2024-12-07 16:33:00.162322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:01.501 [2024-12-07 16:33:00.162469] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:01.501 [2024-12-07 16:33:00.162481] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:01.501 [2024-12-07 16:33:00.162736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:01.501 [2024-12-07 16:33:00.162872] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:01.501 [2024-12-07 16:33:00.162895] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:01.501 [2024-12-07 16:33:00.163040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.501 [2024-12-07 16:33:00.244369] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.501 malloc2 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.501 [2024-12-07 16:33:00.372131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:01.501 [2024-12-07 16:33:00.376415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.501 [2024-12-07 16:33:00.378240] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.501 16:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:01.762 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.762 16:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:01.762 16:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71811 00:07:01.762 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71811 ']' 00:07:01.762 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71811 00:07:01.762 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:07:01.762 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:07:01.762 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71811 00:07:01.762 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.762 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.762 killing process with pid 71811 00:07:01.762 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71811' 00:07:01.762 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71811 00:07:01.762 [2024-12-07 16:33:00.469744] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.762 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71811 00:07:01.762 [2024-12-07 16:33:00.470064] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:01.762 [2024-12-07 16:33:00.470118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.762 [2024-12-07 16:33:00.470135] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:01.762 [2024-12-07 16:33:00.475496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.762 [2024-12-07 16:33:00.475811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.762 [2024-12-07 16:33:00.475853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:02.022 [2024-12-07 16:33:00.685681] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.282 16:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:02.282 00:07:02.282 real 0m1.735s 00:07:02.282 user 0m1.753s 00:07:02.282 sys 0m0.435s 00:07:02.282 16:33:00 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.282 16:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.282 ************************************ 00:07:02.282 END TEST raid1_resize_data_offset_test 00:07:02.282 ************************************ 00:07:02.282 16:33:00 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:02.282 16:33:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:02.282 16:33:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.282 16:33:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.282 ************************************ 00:07:02.282 START TEST raid0_resize_superblock_test 00:07:02.282 ************************************ 00:07:02.282 16:33:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:07:02.282 16:33:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:02.282 16:33:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71862 00:07:02.282 16:33:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:02.282 Process raid pid: 71862 00:07:02.282 16:33:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71862' 00:07:02.282 16:33:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71862 00:07:02.282 16:33:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71862 ']' 00:07:02.282 16:33:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.282 16:33:00 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.282 16:33:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.282 16:33:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.283 16:33:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.283 [2024-12-07 16:33:01.055667] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:02.283 [2024-12-07 16:33:01.055782] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.542 [2024-12-07 16:33:01.217754] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.542 [2024-12-07 16:33:01.264792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.542 [2024-12-07 16:33:01.307530] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.542 [2024-12-07 16:33:01.307579] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.112 16:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.112 16:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:03.112 16:33:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:03.112 16:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.112 16:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:03.112 malloc0 00:07:03.112 16:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.112 16:33:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:03.112 16:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.112 16:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.112 [2024-12-07 16:33:01.988299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:03.112 [2024-12-07 16:33:01.988388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.112 [2024-12-07 16:33:01.988414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:03.112 [2024-12-07 16:33:01.988434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.112 [2024-12-07 16:33:01.990676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.112 [2024-12-07 16:33:01.990715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:03.112 pt0 00:07:03.112 16:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.112 16:33:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:03.112 16:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.112 16:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 e9aef200-cd6a-414b-9212-707c26a297d9 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 017da2f7-3261-43e0-b00b-22c2895b455b 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 3831dbc7-4f83-49e6-b651-3123ffc5a6e4 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 [2024-12-07 16:33:02.123749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 017da2f7-3261-43e0-b00b-22c2895b455b is claimed 00:07:03.372 [2024-12-07 16:33:02.123831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3831dbc7-4f83-49e6-b651-3123ffc5a6e4 is claimed 00:07:03.372 [2024-12-07 16:33:02.123933] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:03.372 [2024-12-07 16:33:02.123945] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:03.372 [2024-12-07 16:33:02.124219] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:03.372 [2024-12-07 16:33:02.124411] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:03.372 [2024-12-07 16:33:02.124437] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:03.372 [2024-12-07 16:33:02.124588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:03.372 16:33:02 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:03.372 [2024-12-07 16:33:02.207827] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 [2024-12-07 16:33:02.255634] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:03.372 [2024-12-07 16:33:02.255662] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '017da2f7-3261-43e0-b00b-22c2895b455b' was resized: old size 131072, new size 204800 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.372 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 [2024-12-07 16:33:02.267556] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:03.372 [2024-12-07 16:33:02.267581] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '3831dbc7-4f83-49e6-b651-3123ffc5a6e4' was resized: old size 131072, new size 204800 00:07:03.372 [2024-12-07 16:33:02.267599] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:03.632 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.632 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:03.632 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.633 16:33:02 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.633 [2024-12-07 16:33:02.375520] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.633 [2024-12-07 16:33:02.415272] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:07:03.633 [2024-12-07 16:33:02.415355] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:03.633 [2024-12-07 16:33:02.415368] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:03.633 [2024-12-07 16:33:02.415382] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:03.633 [2024-12-07 16:33:02.415486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.633 [2024-12-07 16:33:02.415518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.633 [2024-12-07 16:33:02.415530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.633 [2024-12-07 16:33:02.427180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:03.633 [2024-12-07 16:33:02.427252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.633 [2024-12-07 16:33:02.427273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:03.633 [2024-12-07 16:33:02.427286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.633 [2024-12-07 16:33:02.429386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.633 [2024-12-07 16:33:02.429417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:03.633 [2024-12-07 16:33:02.430799] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 017da2f7-3261-43e0-b00b-22c2895b455b 00:07:03.633 [2024-12-07 16:33:02.430872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 017da2f7-3261-43e0-b00b-22c2895b455b is claimed 00:07:03.633 [2024-12-07 16:33:02.430966] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3831dbc7-4f83-49e6-b651-3123ffc5a6e4 00:07:03.633 [2024-12-07 16:33:02.430993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3831dbc7-4f83-49e6-b651-3123ffc5a6e4 is claimed 00:07:03.633 [2024-12-07 16:33:02.431082] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 3831dbc7-4f83-49e6-b651-3123ffc5a6e4 (2) smaller than existing raid bdev Raid (3) 00:07:03.633 [2024-12-07 16:33:02.431114] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 017da2f7-3261-43e0-b00b-22c2895b455b: File exists 00:07:03.633 [2024-12-07 16:33:02.431171] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:07:03.633 [2024-12-07 16:33:02.431181] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:03.633 [2024-12-07 16:33:02.431440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:03.633 [2024-12-07 16:33:02.431575] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:07:03.633 [2024-12-07 16:33:02.431586] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:07:03.633 [2024-12-07 16:33:02.431738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.633 pt0 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:03.633 [2024-12-07 16:33:02.455869] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71862 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71862 ']' 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71862 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.633 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71862 00:07:03.893 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.894 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.894 killing process with pid 71862 00:07:03.894 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71862' 00:07:03.894 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71862 00:07:03.894 [2024-12-07 16:33:02.535717] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.894 [2024-12-07 16:33:02.535790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.894 [2024-12-07 16:33:02.535837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.894 [2024-12-07 16:33:02.535848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:07:03.894 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71862 00:07:03.894 [2024-12-07 16:33:02.694263] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.152 16:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:04.152 00:07:04.152 real 0m1.964s 00:07:04.152 user 0m2.215s 00:07:04.152 sys 0m0.476s 00:07:04.152 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.152 16:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.152 
************************************ 00:07:04.152 END TEST raid0_resize_superblock_test 00:07:04.152 ************************************ 00:07:04.152 16:33:02 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:04.152 16:33:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:04.152 16:33:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.152 16:33:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.152 ************************************ 00:07:04.152 START TEST raid1_resize_superblock_test 00:07:04.152 ************************************ 00:07:04.152 16:33:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:07:04.152 16:33:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:04.152 16:33:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71933 00:07:04.152 16:33:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:04.152 Process raid pid: 71933 00:07:04.152 16:33:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71933' 00:07:04.152 16:33:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71933 00:07:04.152 16:33:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71933 ']' 00:07:04.152 16:33:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.152 16:33:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:04.152 16:33:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.152 16:33:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.152 16:33:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.410 [2024-12-07 16:33:03.086237] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:04.410 [2024-12-07 16:33:03.086368] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.410 [2024-12-07 16:33:03.246515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.410 [2024-12-07 16:33:03.292863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.668 [2024-12-07 16:33:03.335109] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.668 [2024-12-07 16:33:03.335153] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.237 16:33:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.237 16:33:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:05.237 16:33:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:05.237 16:33:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.237 16:33:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.237 malloc0 00:07:05.237 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.237 16:33:04 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:05.237 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.237 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.237 [2024-12-07 16:33:04.023515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:05.237 [2024-12-07 16:33:04.023583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.237 [2024-12-07 16:33:04.023609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:05.237 [2024-12-07 16:33:04.023620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.237 [2024-12-07 16:33:04.025767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.237 [2024-12-07 16:33:04.025804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:05.237 pt0 00:07:05.237 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.237 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:05.237 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.237 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.237 1ae8a18a-c074-4523-9bdb-4062046470b9 00:07:05.237 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.237 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:05.237 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.237 16:33:04 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.498 c5fc2f5d-815c-45ee-a637-2b24e9d2d6a4 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.498 229c6b37-b744-46f4-aede-3b2882ba1ea3 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.498 [2024-12-07 16:33:04.159494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev c5fc2f5d-815c-45ee-a637-2b24e9d2d6a4 is claimed 00:07:05.498 [2024-12-07 16:33:04.159582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 229c6b37-b744-46f4-aede-3b2882ba1ea3 is claimed 00:07:05.498 [2024-12-07 16:33:04.159695] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:05.498 [2024-12-07 16:33:04.159716] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:05.498 [2024-12-07 16:33:04.160003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:05.498 [2024-12-07 16:33:04.160176] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:05.498 [2024-12-07 16:33:04.160196] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:05.498 [2024-12-07 16:33:04.160336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.498 [2024-12-07 16:33:04.267519] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.498 [2024-12-07 16:33:04.315400] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:05.498 [2024-12-07 16:33:04.315428] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c5fc2f5d-815c-45ee-a637-2b24e9d2d6a4' was resized: old size 131072, new size 204800 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:05.498 16:33:04 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.498 [2024-12-07 16:33:04.327300] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:05.498 [2024-12-07 16:33:04.327327] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '229c6b37-b744-46f4-aede-3b2882ba1ea3' was resized: old size 131072, new size 204800 00:07:05.498 [2024-12-07 16:33:04.327357] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.498 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.759 [2024-12-07 16:33:04.419303] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.759 [2024-12-07 16:33:04.459025] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:05.759 [2024-12-07 16:33:04.459100] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:07:05.759 [2024-12-07 16:33:04.459144] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:05.759 [2024-12-07 16:33:04.459329] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:05.759 [2024-12-07 16:33:04.459510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.759 [2024-12-07 16:33:04.459567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.759 [2024-12-07 16:33:04.459581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.759 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.759 [2024-12-07 16:33:04.466934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:05.759 [2024-12-07 16:33:04.466991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.759 [2024-12-07 16:33:04.467029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:05.759 [2024-12-07 16:33:04.467043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.759 [2024-12-07 16:33:04.469447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.759 [2024-12-07 16:33:04.469607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:05.759 [2024-12-07 16:33:04.471458] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
c5fc2f5d-815c-45ee-a637-2b24e9d2d6a4 00:07:05.759 [2024-12-07 16:33:04.471529] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev c5fc2f5d-815c-45ee-a637-2b24e9d2d6a4 is claimed 00:07:05.759 [2024-12-07 16:33:04.471618] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 229c6b37-b744-46f4-aede-3b2882ba1ea3 00:07:05.759 [2024-12-07 16:33:04.471642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 229c6b37-b744-46f4-aede-3b2882ba1ea3 is claimed 00:07:05.759 [2024-12-07 16:33:04.471795] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 229c6b37-b744-46f4-aede-3b2882ba1ea3 (2) smaller than existing raid bdev Raid (3) 00:07:05.759 [2024-12-07 16:33:04.471820] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev c5fc2f5d-815c-45ee-a637-2b24e9d2d6a4: File exists 00:07:05.760 pt0 00:07:05.760 [2024-12-07 16:33:04.471964] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:07:05.760 [2024-12-07 16:33:04.471980] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:05.760 [2024-12-07 16:33:04.472246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.760 [2024-12-07 16:33:04.472416] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:07:05.760 [2024-12-07 16:33:04.472426] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:05.760 [2024-12-07 16:33:04.472567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.760 [2024-12-07 16:33:04.487770] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71933 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71933 ']' 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71933 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71933 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.760 killing process with pid 71933 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71933' 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71933 00:07:05.760 [2024-12-07 16:33:04.573302] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.760 [2024-12-07 16:33:04.573404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.760 [2024-12-07 16:33:04.573469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.760 [2024-12-07 16:33:04.573487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:07:05.760 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71933 00:07:06.019 [2024-12-07 16:33:04.731890] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.279 16:33:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:06.279 00:07:06.279 real 0m1.971s 00:07:06.279 user 0m2.257s 00:07:06.279 sys 0m0.480s 00:07:06.279 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.279 16:33:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.279 ************************************ 00:07:06.279 END TEST raid1_resize_superblock_test 00:07:06.279 
************************************ 00:07:06.279 16:33:05 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:06.279 16:33:05 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:06.279 16:33:05 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:06.279 16:33:05 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:06.279 16:33:05 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:06.279 16:33:05 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:06.279 16:33:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:06.279 16:33:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.279 16:33:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.279 ************************************ 00:07:06.279 START TEST raid_function_test_raid0 00:07:06.279 ************************************ 00:07:06.279 16:33:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:07:06.279 16:33:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:06.279 16:33:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:06.279 16:33:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:06.279 16:33:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=72008 00:07:06.279 16:33:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.279 Process raid pid: 72008 00:07:06.279 16:33:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72008' 00:07:06.279 16:33:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 72008 00:07:06.279 16:33:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 
72008 ']' 00:07:06.279 16:33:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.279 16:33:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.279 16:33:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.280 16:33:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.280 16:33:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:06.280 [2024-12-07 16:33:05.150550] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:06.280 [2024-12-07 16:33:05.150673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.538 [2024-12-07 16:33:05.311445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.538 [2024-12-07 16:33:05.355254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.538 [2024-12-07 16:33:05.397119] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.538 [2024-12-07 16:33:05.397159] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.104 16:33:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.104 16:33:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:07:07.104 16:33:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:07.104 16:33:05 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.104 16:33:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:07.104 Base_1 00:07:07.104 16:33:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.104 16:33:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:07.104 16:33:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.104 16:33:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:07.364 Base_2 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:07.364 [2024-12-07 16:33:06.020298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:07.364 [2024-12-07 16:33:06.022141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:07.364 [2024-12-07 16:33:06.022220] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:07.364 [2024-12-07 16:33:06.022238] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:07.364 [2024-12-07 16:33:06.022516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:07.364 [2024-12-07 16:33:06.022649] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:07.364 [2024-12-07 16:33:06.022677] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000006280 00:07:07.364 [2024-12-07 16:33:06.022802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:07:07.364 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:07.624 [2024-12-07 16:33:06.263898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:07.624 /dev/nbd0 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.624 1+0 records in 00:07:07.624 1+0 records out 00:07:07.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387608 s, 10.6 MB/s 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@886 -- # size=4096 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.624 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:07.625 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:07.625 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:07.625 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:07.882 { 00:07:07.882 "nbd_device": "/dev/nbd0", 00:07:07.882 "bdev_name": "raid" 00:07:07.882 } 00:07:07.882 ]' 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.882 { 00:07:07.882 "nbd_device": "/dev/nbd0", 00:07:07.882 "bdev_name": "raid" 00:07:07.882 } 00:07:07.882 ]' 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:07.882 
16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 
00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:07.882 4096+0 records in 00:07:07.882 4096+0 records out 00:07:07.882 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0340558 s, 61.6 MB/s 00:07:07.882 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:08.140 4096+0 records in 00:07:08.140 4096+0 records out 00:07:08.140 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.172614 s, 12.1 MB/s 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:08.140 128+0 records in 00:07:08.140 128+0 records out 00:07:08.140 65536 bytes (66 kB, 64 KiB) copied, 0.000957612 s, 68.4 MB/s 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( 
i++ )) 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:08.140 2035+0 records in 00:07:08.140 2035+0 records out 00:07:08.140 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0093121 s, 112 MB/s 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:08.140 456+0 records in 00:07:08.140 456+0 records out 00:07:08.140 233472 bytes (233 kB, 228 KiB) copied, 0.00245363 s, 95.2 MB/s 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:08.140 16:33:06 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.140 16:33:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:08.397 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.397 [2024-12-07 16:33:07.105335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.397 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.397 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.397 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.397 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.397 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:07:08.397 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:08.397 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.397 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:08.397 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:08.397 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 72008 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 72008 ']' 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@954 -- # kill -0 72008 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72008 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.657 killing process with pid 72008 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72008' 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 72008 00:07:08.657 [2024-12-07 16:33:07.414863] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.657 [2024-12-07 16:33:07.415008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.657 16:33:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 72008 00:07:08.657 [2024-12-07 16:33:07.415074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.657 [2024-12-07 16:33:07.415099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:07:08.657 [2024-12-07 16:33:07.438235] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.917 16:33:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:08.917 00:07:08.917 real 0m2.606s 00:07:08.917 user 0m3.253s 00:07:08.917 sys 0m0.856s 00:07:08.917 16:33:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.917 16:33:07 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:07:08.917 ************************************ 00:07:08.917 END TEST raid_function_test_raid0 00:07:08.917 ************************************ 00:07:08.917 16:33:07 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:08.917 16:33:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:08.917 16:33:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.917 16:33:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.917 ************************************ 00:07:08.917 START TEST raid_function_test_concat 00:07:08.917 ************************************ 00:07:08.917 16:33:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:07:08.917 16:33:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:08.917 16:33:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:08.917 16:33:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:08.917 16:33:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=72122 00:07:08.917 16:33:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:08.917 Process raid pid: 72122 00:07:08.917 16:33:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72122' 00:07:08.917 16:33:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 72122 00:07:08.917 16:33:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 72122 ']' 00:07:08.917 16:33:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.917 16:33:07 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.917 16:33:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.917 16:33:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.917 16:33:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.176 [2024-12-07 16:33:07.821138] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:09.176 [2024-12-07 16:33:07.821256] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.176 [2024-12-07 16:33:07.977865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.177 [2024-12-07 16:33:08.025699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.177 [2024-12-07 16:33:08.068482] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.177 [2024-12-07 16:33:08.068524] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:10.115 Base_1 
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:10.115 Base_2
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:10.115 [2024-12-07 16:33:08.712179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:10.115 [2024-12-07 16:33:08.714063] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:10.115 [2024-12-07 16:33:08.714136] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:07:10.115 [2024-12-07 16:33:08.714154] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:10.115 [2024-12-07 16:33:08.714467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:07:10.115 [2024-12-07 16:33:08.714612] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:07:10.115 [2024-12-07 16:33:08.714630] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280
00:07:10.115 [2024-12-07 16:33:08.714774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:07:10.115 [2024-12-07 16:33:08.927833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:10.115 /dev/nbd0
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:10.115 1+0 records in
00:07:10.115 1+0 records out
00:07:10.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380292 s, 10.8 MB/s
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:10.115 16:33:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:10.377 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:10.378 {
00:07:10.378 "nbd_device": "/dev/nbd0",
00:07:10.378 "bdev_name": "raid"
00:07:10.378 }
00:07:10.378 ]'
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:10.378 {
00:07:10.378 "nbd_device": "/dev/nbd0",
00:07:10.378 "bdev_name": "raid"
00:07:10.378 }
00:07:10.378 ]'
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:07:10.378 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:07:10.638 4096+0 records in
00:07:10.638 4096+0 records out
00:07:10.638 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0376389 s, 55.7 MB/s
00:07:10.638 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:07:10.638 4096+0 records in
00:07:10.638 4096+0 records out
00:07:10.638 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.179182 s, 11.7 MB/s
00:07:10.638 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:07:10.638 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:10.638 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:07:10.638 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:10.638 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:07:10.638 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:07:10.638 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:07:10.638 128+0 records in
00:07:10.638 128+0 records out
00:07:10.638 65536 bytes (66 kB, 64 KiB) copied, 0.00108285 s, 60.5 MB/s
00:07:10.638 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:07:10.638 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:10.638 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:07:10.899 2035+0 records in
00:07:10.899 2035+0 records out
00:07:10.899 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0126309 s, 82.5 MB/s
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:07:10.899 456+0 records in
00:07:10.899 456+0 records out
00:07:10.899 233472 bytes (233 kB, 228 KiB) copied, 0.00257843 s, 90.5 MB/s
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:10.899 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:07:11.159 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:11.159 [2024-12-07 16:33:09.819159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:11.159 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:11.159 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:11.159 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:11.159 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:11.159 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:11.159 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:07:11.159 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:07:11.159 16:33:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:07:11.159 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:11.160 16:33:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:11.160 16:33:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:11.160 16:33:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:11.160 16:33:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 72122
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 72122 ']'
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 72122
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72122
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:11.420 killing process with pid 72122 16:33:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72122' 16:33:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 72122
00:07:11.420 [2024-12-07 16:33:10.112444] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:11.420 [2024-12-07 16:33:10.112593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:11.420 [2024-12-07 16:33:10.112667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:11.420 [2024-12-07 16:33:10.112689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline
00:07:11.420 16:33:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 72122
00:07:11.420 [2024-12-07 16:33:10.154849] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:11.679 16:33:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:07:11.679
00:07:11.679 real 0m2.777s
00:07:11.679 user 0m3.392s
00:07:11.679 sys 0m0.892s
00:07:11.679 16:33:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:11.679 16:33:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:11.679 ************************************
00:07:11.679 END TEST raid_function_test_concat
00:07:11.679 ************************************
00:07:11.679 16:33:10 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:07:11.679 16:33:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:11.940 16:33:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:11.940 16:33:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:11.940 ************************************
00:07:11.940 START TEST raid0_resize_test
00:07:11.940 ************************************
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72234
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:11.940 Process raid pid: 72234
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72234'
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72234
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72234 ']'
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:11.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:11.940 16:33:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:11.940 [2024-12-07 16:33:10.672702] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:11.940 [2024-12-07 16:33:10.672823] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:11.940 [2024-12-07 16:33:10.833770] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:12.200 [2024-12-07 16:33:10.904375] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:12.200 [2024-12-07 16:33:10.981925] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:12.200 [2024-12-07 16:33:10.981970] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:12.769 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:12.769 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0
00:07:12.769 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:07:12.769 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:12.769 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.769 Base_1
00:07:12.769 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:12.769 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:07:12.769 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:12.769 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.769 Base_2
00:07:12.769 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:12.769 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']'
00:07:12.769 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid
00:07:12.769 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:12.769 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.769 [2024-12-07 16:33:11.519277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:12.769 [2024-12-07 16:33:11.521289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:12.769 [2024-12-07 16:33:11.521353] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:07:12.769 [2024-12-07 16:33:11.521371] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:12.769 [2024-12-07 16:33:11.521616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:07:12.769 [2024-12-07 16:33:11.521733] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:07:12.769 [2024-12-07 16:33:11.521747] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280
00:07:12.770 [2024-12-07 16:33:11.521862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.770 [2024-12-07 16:33:11.531227] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:12.770 [2024-12-07 16:33:11.531253] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:07:12.770 true
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.770 [2024-12-07 16:33:11.547426] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']'
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']'
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.770 [2024-12-07 16:33:11.591153] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:12.770 [2024-12-07 16:33:11.591175] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:07:12.770 [2024-12-07 16:33:11.591195] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144
00:07:12.770 true
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.770 [2024-12-07 16:33:11.607287] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']'
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']'
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72234
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72234 ']'
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 72234
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:12.770 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72234
00:07:13.030 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:13.030 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:13.030 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72234' killing process with pid 72234 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 72234
00:07:13.030 [2024-12-07 16:33:11.684506] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:13.030 [2024-12-07 16:33:11.684598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:13.030 [2024-12-07 16:33:11.684642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:13.030 [2024-12-07 16:33:11.684655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline
00:07:13.030 16:33:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 72234
00:07:13.030 [2024-12-07 16:33:11.686604] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:13.289 16:33:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:07:13.290
00:07:13.290 real 0m1.468s
00:07:13.290 user 0m1.531s
00:07:13.290 sys 0m0.392s
00:07:13.290 16:33:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:13.290 16:33:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:13.290 ************************************
00:07:13.290 END TEST raid0_resize_test
00:07:13.290 ************************************
00:07:13.290 16:33:12 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1
00:07:13.290 16:33:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:13.290 16:33:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:13.290 16:33:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:13.290 ************************************
00:07:13.290 START TEST raid1_resize_test
00:07:13.290 ************************************
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72284
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:13.290 Process raid pid: 72284
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72284'
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72284
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72284 ']'
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:13.290 16:33:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:13.549 [2024-12-07 16:33:12.200555] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:13.549 [2024-12-07 16:33:12.200702] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:13.550 [2024-12-07 16:33:12.361686] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.550 [2024-12-07 16:33:12.430668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.809 [2024-12-07 16:33:12.506282] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:13.809 [2024-12-07 16:33:12.506331] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.377 Base_1
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.377 Base_2
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']'
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.377 [2024-12-07 16:33:13.068996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:14.377 [2024-12-07 16:33:13.071052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:14.377 [2024-12-07 16:33:13.071142] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:07:14.377 [2024-12-07 16:33:13.071153] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:07:14.377 [2024-12-07 16:33:13.071431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:07:14.377 [2024-12-07 16:33:13.071558] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:07:14.377 [2024-12-07 16:33:13.071582] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280
00:07:14.377 [2024-12-07 16:33:13.071730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:07:14.377
16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.377 [2024-12-07 16:33:13.080949] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:14.377 [2024-12-07 16:33:13.080977] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:14.377 true 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.377 [2024-12-07 16:33:13.097106] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:07:14.377 [2024-12-07 16:33:13.140831] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:14.377 [2024-12-07 16:33:13.140852] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:14.377 [2024-12-07 16:33:13.140873] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:14.377 true 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:14.377 [2024-12-07 16:33:13.152978] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72284 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72284 ']' 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 72284 
00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72284 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72284' 00:07:14.377 killing process with pid 72284 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 72284 00:07:14.377 [2024-12-07 16:33:13.238200] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.377 [2024-12-07 16:33:13.238280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.377 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 72284 00:07:14.377 [2024-12-07 16:33:13.238726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.377 [2024-12-07 16:33:13.238744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:14.377 [2024-12-07 16:33:13.240490] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.944 16:33:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:14.944 00:07:14.944 real 0m1.494s 00:07:14.945 user 0m1.596s 00:07:14.945 sys 0m0.371s 00:07:14.945 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.945 16:33:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.945 ************************************ 00:07:14.945 END TEST 
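The resize test above boils down to block-count arithmetic: a 32 MiB null bdev with 512-byte blocks reports 65536 blocks, and after both base bdevs are resized to 64 MiB the raid1 bdev's block count doubles to 131072. A minimal sketch of that computation (the helper names here are illustrative, not SPDK functions):

```python
def size_mb_to_blocks(size_mb: int, blksize: int = 512) -> int:
    """Convert a bdev size in MiB to its block count at the given block size."""
    return size_mb * 1024 * 1024 // blksize

def blocks_to_size_mb(blkcnt: int, blksize: int = 512) -> int:
    """Convert a block count back to a size in MiB (as raid_size_mb does above)."""
    return blkcnt * blksize // (1024 * 1024)

# Values matching the log: Base_1/Base_2 start at 32 MiB, resize to 64 MiB.
old_blocks = size_mb_to_blocks(32)
new_blocks = size_mb_to_blocks(64)
print(old_blocks, new_blocks, blocks_to_size_mb(new_blocks))
```

For a raid1 (mirror), the array's usable block count tracks the smallest base bdev, which is why the raid size only changes after the second `bdev_null_resize` completes in the log above.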
raid1_resize_test 00:07:14.945 ************************************ 00:07:14.945 16:33:13 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:14.945 16:33:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:14.945 16:33:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:14.945 16:33:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:14.945 16:33:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.945 16:33:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.945 ************************************ 00:07:14.945 START TEST raid_state_function_test 00:07:14.945 ************************************ 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:14.945 16:33:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72336 00:07:14.945 Process raid pid: 72336 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72336' 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72336 00:07:14.945 16:33:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72336 ']' 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.945 16:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.945 [2024-12-07 16:33:13.772211] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:14.945 [2024-12-07 16:33:13.772327] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.203 [2024-12-07 16:33:13.932233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.203 [2024-12-07 16:33:14.008180] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.203 [2024-12-07 16:33:14.084586] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.203 [2024-12-07 16:33:14.084625] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.770 16:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.770 16:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:15.770 16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.770 16:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.770 16:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.770 [2024-12-07 16:33:14.596254] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:15.770 [2024-12-07 16:33:14.596322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:15.770 [2024-12-07 16:33:14.596353] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.770 [2024-12-07 16:33:14.596365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.770 16:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.770 16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:15.770 16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.770 16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.770 16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.770 16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.770 16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.770 16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.770 16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.771 16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.771 
16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.771 16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.771 16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.771 16:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.771 16:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.771 16:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.771 16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.771 "name": "Existed_Raid", 00:07:15.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.771 "strip_size_kb": 64, 00:07:15.771 "state": "configuring", 00:07:15.771 "raid_level": "raid0", 00:07:15.771 "superblock": false, 00:07:15.771 "num_base_bdevs": 2, 00:07:15.771 "num_base_bdevs_discovered": 0, 00:07:15.771 "num_base_bdevs_operational": 2, 00:07:15.771 "base_bdevs_list": [ 00:07:15.771 { 00:07:15.771 "name": "BaseBdev1", 00:07:15.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.771 "is_configured": false, 00:07:15.771 "data_offset": 0, 00:07:15.771 "data_size": 0 00:07:15.771 }, 00:07:15.771 { 00:07:15.771 "name": "BaseBdev2", 00:07:15.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.771 "is_configured": false, 00:07:15.771 "data_offset": 0, 00:07:15.771 "data_size": 0 00:07:15.771 } 00:07:15.771 ] 00:07:15.771 }' 00:07:15.771 16:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.771 16:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:16.341 16:33:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.341 [2024-12-07 16:33:15.051486] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:16.341 [2024-12-07 16:33:15.051556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.341 [2024-12-07 16:33:15.063474] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:16.341 [2024-12-07 16:33:15.063518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:16.341 [2024-12-07 16:33:15.063526] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:16.341 [2024-12-07 16:33:15.063536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.341 [2024-12-07 16:33:15.090313] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:16.341 BaseBdev1 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.341 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.341 [ 00:07:16.341 { 00:07:16.341 "name": "BaseBdev1", 00:07:16.341 "aliases": [ 00:07:16.341 "316d0eb6-2619-4a19-8ae2-8754acdad77c" 00:07:16.341 ], 00:07:16.341 "product_name": "Malloc disk", 00:07:16.341 "block_size": 512, 00:07:16.341 "num_blocks": 65536, 00:07:16.341 "uuid": 
"316d0eb6-2619-4a19-8ae2-8754acdad77c", 00:07:16.341 "assigned_rate_limits": { 00:07:16.341 "rw_ios_per_sec": 0, 00:07:16.341 "rw_mbytes_per_sec": 0, 00:07:16.341 "r_mbytes_per_sec": 0, 00:07:16.341 "w_mbytes_per_sec": 0 00:07:16.341 }, 00:07:16.341 "claimed": true, 00:07:16.341 "claim_type": "exclusive_write", 00:07:16.341 "zoned": false, 00:07:16.341 "supported_io_types": { 00:07:16.341 "read": true, 00:07:16.341 "write": true, 00:07:16.341 "unmap": true, 00:07:16.341 "flush": true, 00:07:16.341 "reset": true, 00:07:16.341 "nvme_admin": false, 00:07:16.341 "nvme_io": false, 00:07:16.341 "nvme_io_md": false, 00:07:16.341 "write_zeroes": true, 00:07:16.341 "zcopy": true, 00:07:16.341 "get_zone_info": false, 00:07:16.341 "zone_management": false, 00:07:16.341 "zone_append": false, 00:07:16.341 "compare": false, 00:07:16.341 "compare_and_write": false, 00:07:16.341 "abort": true, 00:07:16.341 "seek_hole": false, 00:07:16.341 "seek_data": false, 00:07:16.341 "copy": true, 00:07:16.341 "nvme_iov_md": false 00:07:16.341 }, 00:07:16.341 "memory_domains": [ 00:07:16.341 { 00:07:16.341 "dma_device_id": "system", 00:07:16.341 "dma_device_type": 1 00:07:16.341 }, 00:07:16.341 { 00:07:16.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.341 "dma_device_type": 2 00:07:16.341 } 00:07:16.341 ], 00:07:16.342 "driver_specific": {} 00:07:16.342 } 00:07:16.342 ] 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.342 16:33:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.342 "name": "Existed_Raid", 00:07:16.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.342 "strip_size_kb": 64, 00:07:16.342 "state": "configuring", 00:07:16.342 "raid_level": "raid0", 00:07:16.342 "superblock": false, 00:07:16.342 "num_base_bdevs": 2, 00:07:16.342 "num_base_bdevs_discovered": 1, 00:07:16.342 "num_base_bdevs_operational": 2, 00:07:16.342 "base_bdevs_list": [ 00:07:16.342 { 00:07:16.342 "name": "BaseBdev1", 00:07:16.342 "uuid": "316d0eb6-2619-4a19-8ae2-8754acdad77c", 00:07:16.342 "is_configured": true, 00:07:16.342 "data_offset": 0, 
00:07:16.342 "data_size": 65536 00:07:16.342 }, 00:07:16.342 { 00:07:16.342 "name": "BaseBdev2", 00:07:16.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.342 "is_configured": false, 00:07:16.342 "data_offset": 0, 00:07:16.342 "data_size": 0 00:07:16.342 } 00:07:16.342 ] 00:07:16.342 }' 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.342 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.909 [2024-12-07 16:33:15.597500] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:16.909 [2024-12-07 16:33:15.597566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.909 [2024-12-07 16:33:15.609514] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:16.909 [2024-12-07 16:33:15.611651] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:16.909 [2024-12-07 16:33:15.611695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.909 "name": "Existed_Raid", 00:07:16.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.909 "strip_size_kb": 64, 00:07:16.909 "state": "configuring", 00:07:16.909 "raid_level": "raid0", 00:07:16.909 "superblock": false, 00:07:16.909 "num_base_bdevs": 2, 00:07:16.909 "num_base_bdevs_discovered": 1, 00:07:16.909 "num_base_bdevs_operational": 2, 00:07:16.909 "base_bdevs_list": [ 00:07:16.909 { 00:07:16.909 "name": "BaseBdev1", 00:07:16.909 "uuid": "316d0eb6-2619-4a19-8ae2-8754acdad77c", 00:07:16.909 "is_configured": true, 00:07:16.909 "data_offset": 0, 00:07:16.909 "data_size": 65536 00:07:16.909 }, 00:07:16.909 { 00:07:16.909 "name": "BaseBdev2", 00:07:16.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.909 "is_configured": false, 00:07:16.909 "data_offset": 0, 00:07:16.909 "data_size": 0 00:07:16.909 } 00:07:16.909 ] 00:07:16.909 }' 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.909 16:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.168 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:17.168 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.168 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.427 [2024-12-07 16:33:16.078549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:17.427 [2024-12-07 16:33:16.078693] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:17.427 [2024-12-07 16:33:16.078727] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:17.427 [2024-12-07 16:33:16.079119] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:17.427 [2024-12-07 16:33:16.079331] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:17.427 [2024-12-07 16:33:16.079407] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:17.427 [2024-12-07 16:33:16.079719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.427 BaseBdev2 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.427 16:33:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.427 [ 00:07:17.427 { 00:07:17.427 "name": "BaseBdev2", 00:07:17.427 "aliases": [ 00:07:17.427 "6b4922c7-0ba3-4a03-91e2-4e1b37bed61c" 00:07:17.427 ], 00:07:17.427 "product_name": "Malloc disk", 00:07:17.427 "block_size": 512, 00:07:17.427 "num_blocks": 65536, 00:07:17.427 "uuid": "6b4922c7-0ba3-4a03-91e2-4e1b37bed61c", 00:07:17.427 "assigned_rate_limits": { 00:07:17.427 "rw_ios_per_sec": 0, 00:07:17.427 "rw_mbytes_per_sec": 0, 00:07:17.427 "r_mbytes_per_sec": 0, 00:07:17.427 "w_mbytes_per_sec": 0 00:07:17.427 }, 00:07:17.427 "claimed": true, 00:07:17.427 "claim_type": "exclusive_write", 00:07:17.427 "zoned": false, 00:07:17.427 "supported_io_types": { 00:07:17.427 "read": true, 00:07:17.427 "write": true, 00:07:17.427 "unmap": true, 00:07:17.427 "flush": true, 00:07:17.427 "reset": true, 00:07:17.427 "nvme_admin": false, 00:07:17.427 "nvme_io": false, 00:07:17.427 "nvme_io_md": false, 00:07:17.427 "write_zeroes": true, 00:07:17.427 "zcopy": true, 00:07:17.427 "get_zone_info": false, 00:07:17.427 "zone_management": false, 00:07:17.427 "zone_append": false, 00:07:17.427 "compare": false, 00:07:17.427 "compare_and_write": false, 00:07:17.427 "abort": true, 00:07:17.427 "seek_hole": false, 00:07:17.427 "seek_data": false, 00:07:17.427 "copy": true, 00:07:17.427 "nvme_iov_md": false 00:07:17.427 }, 00:07:17.427 "memory_domains": [ 00:07:17.427 { 00:07:17.427 "dma_device_id": "system", 00:07:17.427 "dma_device_type": 1 00:07:17.427 }, 00:07:17.427 { 00:07:17.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.427 "dma_device_type": 2 00:07:17.427 } 00:07:17.427 ], 00:07:17.427 "driver_specific": {} 00:07:17.427 } 00:07:17.427 ] 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:17.427 16:33:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.427 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.428 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.428 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.428 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.428 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.428 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.428 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.428 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.428 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.428 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.428 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:17.428 "name": "Existed_Raid", 00:07:17.428 "uuid": "e4b7d736-dc93-4aca-a04c-f0686fb04a2f", 00:07:17.428 "strip_size_kb": 64, 00:07:17.428 "state": "online", 00:07:17.428 "raid_level": "raid0", 00:07:17.428 "superblock": false, 00:07:17.428 "num_base_bdevs": 2, 00:07:17.428 "num_base_bdevs_discovered": 2, 00:07:17.428 "num_base_bdevs_operational": 2, 00:07:17.428 "base_bdevs_list": [ 00:07:17.428 { 00:07:17.428 "name": "BaseBdev1", 00:07:17.428 "uuid": "316d0eb6-2619-4a19-8ae2-8754acdad77c", 00:07:17.428 "is_configured": true, 00:07:17.428 "data_offset": 0, 00:07:17.428 "data_size": 65536 00:07:17.428 }, 00:07:17.428 { 00:07:17.428 "name": "BaseBdev2", 00:07:17.428 "uuid": "6b4922c7-0ba3-4a03-91e2-4e1b37bed61c", 00:07:17.428 "is_configured": true, 00:07:17.428 "data_offset": 0, 00:07:17.428 "data_size": 65536 00:07:17.428 } 00:07:17.428 ] 00:07:17.428 }' 00:07:17.428 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.428 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.687 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:17.687 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:17.687 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:17.687 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:17.687 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:17.687 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:17.687 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:17.687 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:07:17.687 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.687 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.687 [2024-12-07 16:33:16.538201] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.687 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.687 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:17.687 "name": "Existed_Raid", 00:07:17.687 "aliases": [ 00:07:17.687 "e4b7d736-dc93-4aca-a04c-f0686fb04a2f" 00:07:17.687 ], 00:07:17.687 "product_name": "Raid Volume", 00:07:17.687 "block_size": 512, 00:07:17.687 "num_blocks": 131072, 00:07:17.687 "uuid": "e4b7d736-dc93-4aca-a04c-f0686fb04a2f", 00:07:17.687 "assigned_rate_limits": { 00:07:17.687 "rw_ios_per_sec": 0, 00:07:17.687 "rw_mbytes_per_sec": 0, 00:07:17.687 "r_mbytes_per_sec": 0, 00:07:17.687 "w_mbytes_per_sec": 0 00:07:17.687 }, 00:07:17.687 "claimed": false, 00:07:17.687 "zoned": false, 00:07:17.687 "supported_io_types": { 00:07:17.687 "read": true, 00:07:17.687 "write": true, 00:07:17.687 "unmap": true, 00:07:17.687 "flush": true, 00:07:17.687 "reset": true, 00:07:17.687 "nvme_admin": false, 00:07:17.687 "nvme_io": false, 00:07:17.687 "nvme_io_md": false, 00:07:17.687 "write_zeroes": true, 00:07:17.687 "zcopy": false, 00:07:17.687 "get_zone_info": false, 00:07:17.687 "zone_management": false, 00:07:17.687 "zone_append": false, 00:07:17.687 "compare": false, 00:07:17.687 "compare_and_write": false, 00:07:17.687 "abort": false, 00:07:17.687 "seek_hole": false, 00:07:17.687 "seek_data": false, 00:07:17.687 "copy": false, 00:07:17.687 "nvme_iov_md": false 00:07:17.687 }, 00:07:17.687 "memory_domains": [ 00:07:17.687 { 00:07:17.687 "dma_device_id": "system", 00:07:17.687 "dma_device_type": 1 00:07:17.687 }, 00:07:17.687 { 00:07:17.687 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:17.687 "dma_device_type": 2 00:07:17.687 }, 00:07:17.687 { 00:07:17.687 "dma_device_id": "system", 00:07:17.687 "dma_device_type": 1 00:07:17.687 }, 00:07:17.687 { 00:07:17.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.687 "dma_device_type": 2 00:07:17.687 } 00:07:17.687 ], 00:07:17.687 "driver_specific": { 00:07:17.687 "raid": { 00:07:17.687 "uuid": "e4b7d736-dc93-4aca-a04c-f0686fb04a2f", 00:07:17.687 "strip_size_kb": 64, 00:07:17.687 "state": "online", 00:07:17.687 "raid_level": "raid0", 00:07:17.687 "superblock": false, 00:07:17.687 "num_base_bdevs": 2, 00:07:17.687 "num_base_bdevs_discovered": 2, 00:07:17.687 "num_base_bdevs_operational": 2, 00:07:17.687 "base_bdevs_list": [ 00:07:17.687 { 00:07:17.687 "name": "BaseBdev1", 00:07:17.687 "uuid": "316d0eb6-2619-4a19-8ae2-8754acdad77c", 00:07:17.687 "is_configured": true, 00:07:17.687 "data_offset": 0, 00:07:17.687 "data_size": 65536 00:07:17.687 }, 00:07:17.687 { 00:07:17.687 "name": "BaseBdev2", 00:07:17.687 "uuid": "6b4922c7-0ba3-4a03-91e2-4e1b37bed61c", 00:07:17.687 "is_configured": true, 00:07:17.687 "data_offset": 0, 00:07:17.687 "data_size": 65536 00:07:17.687 } 00:07:17.687 ] 00:07:17.687 } 00:07:17.687 } 00:07:17.687 }' 00:07:17.687 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:17.946 BaseBdev2' 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.946 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:17.947 [2024-12-07 16:33:16.769596] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:17.947 [2024-12-07 16:33:16.769647] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.947 [2024-12-07 16:33:16.769704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.947 16:33:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.947 "name": "Existed_Raid", 00:07:17.947 "uuid": "e4b7d736-dc93-4aca-a04c-f0686fb04a2f", 00:07:17.947 "strip_size_kb": 64, 00:07:17.947 "state": "offline", 00:07:17.947 "raid_level": "raid0", 00:07:17.947 "superblock": false, 00:07:17.947 "num_base_bdevs": 2, 00:07:17.947 "num_base_bdevs_discovered": 1, 00:07:17.947 "num_base_bdevs_operational": 1, 00:07:17.947 "base_bdevs_list": [ 00:07:17.947 { 00:07:17.947 "name": null, 00:07:17.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.947 "is_configured": false, 00:07:17.947 "data_offset": 0, 00:07:17.947 "data_size": 65536 00:07:17.947 }, 00:07:17.947 { 00:07:17.947 "name": "BaseBdev2", 00:07:17.947 "uuid": "6b4922c7-0ba3-4a03-91e2-4e1b37bed61c", 00:07:17.947 "is_configured": true, 00:07:17.947 "data_offset": 0, 00:07:17.947 "data_size": 65536 00:07:17.947 } 00:07:17.947 ] 00:07:17.947 }' 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.947 16:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.515 [2024-12-07 16:33:17.313293] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:18.515 [2024-12-07 16:33:17.313447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.515 16:33:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72336 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72336 ']' 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72336 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.515 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72336 00:07:18.777 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.777 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.777 killing process with pid 72336 00:07:18.777 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72336' 00:07:18.777 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72336 00:07:18.777 [2024-12-07 16:33:17.433615] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:18.777 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72336 00:07:18.777 [2024-12-07 16:33:17.435212] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:19.037 00:07:19.037 real 0m4.125s 00:07:19.037 user 0m6.375s 00:07:19.037 sys 0m0.831s 00:07:19.037 ************************************ 00:07:19.037 END TEST raid_state_function_test 00:07:19.037 ************************************ 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.037 16:33:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:19.037 16:33:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:19.037 16:33:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.037 16:33:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.037 ************************************ 00:07:19.037 START TEST raid_state_function_test_sb 00:07:19.037 ************************************ 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72578 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72578' 00:07:19.037 Process raid pid: 72578 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72578 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72578 ']' 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.037 16:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.297 [2024-12-07 16:33:17.965680] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:19.297 [2024-12-07 16:33:17.965900] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.297 [2024-12-07 16:33:18.125986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.557 [2024-12-07 16:33:18.201002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.557 [2024-12-07 16:33:18.279153] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.557 [2024-12-07 16:33:18.279193] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.126 [2024-12-07 16:33:18.803270] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:20.126 [2024-12-07 16:33:18.803430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:20.126 [2024-12-07 16:33:18.803448] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.126 [2024-12-07 16:33:18.803459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.126 
16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.126 16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.127 16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.127 16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.127 16:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.127 16:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.127 16:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.127 16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.127 "name": "Existed_Raid", 00:07:20.127 "uuid": "cb096bf3-d555-4174-8c6f-1972ba8ce23f", 00:07:20.127 "strip_size_kb": 
64, 00:07:20.127 "state": "configuring", 00:07:20.127 "raid_level": "raid0", 00:07:20.127 "superblock": true, 00:07:20.127 "num_base_bdevs": 2, 00:07:20.127 "num_base_bdevs_discovered": 0, 00:07:20.127 "num_base_bdevs_operational": 2, 00:07:20.127 "base_bdevs_list": [ 00:07:20.127 { 00:07:20.127 "name": "BaseBdev1", 00:07:20.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.127 "is_configured": false, 00:07:20.127 "data_offset": 0, 00:07:20.127 "data_size": 0 00:07:20.127 }, 00:07:20.127 { 00:07:20.127 "name": "BaseBdev2", 00:07:20.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.127 "is_configured": false, 00:07:20.127 "data_offset": 0, 00:07:20.127 "data_size": 0 00:07:20.127 } 00:07:20.127 ] 00:07:20.127 }' 00:07:20.127 16:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.127 16:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.386 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:20.386 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.386 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.647 [2024-12-07 16:33:19.286324] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:20.647 [2024-12-07 16:33:19.286442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.647 16:33:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.647 [2024-12-07 16:33:19.298337] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:20.647 [2024-12-07 16:33:19.298428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:20.647 [2024-12-07 16:33:19.298453] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.647 [2024-12-07 16:33:19.298475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.647 [2024-12-07 16:33:19.321608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:20.647 BaseBdev1 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.647 [ 00:07:20.647 { 00:07:20.647 "name": "BaseBdev1", 00:07:20.647 "aliases": [ 00:07:20.647 "447c146a-c586-43ee-830d-75707fb1b821" 00:07:20.647 ], 00:07:20.647 "product_name": "Malloc disk", 00:07:20.647 "block_size": 512, 00:07:20.647 "num_blocks": 65536, 00:07:20.647 "uuid": "447c146a-c586-43ee-830d-75707fb1b821", 00:07:20.647 "assigned_rate_limits": { 00:07:20.647 "rw_ios_per_sec": 0, 00:07:20.647 "rw_mbytes_per_sec": 0, 00:07:20.647 "r_mbytes_per_sec": 0, 00:07:20.647 "w_mbytes_per_sec": 0 00:07:20.647 }, 00:07:20.647 "claimed": true, 00:07:20.647 "claim_type": "exclusive_write", 00:07:20.647 "zoned": false, 00:07:20.647 "supported_io_types": { 00:07:20.647 "read": true, 00:07:20.647 "write": true, 00:07:20.647 "unmap": true, 00:07:20.647 "flush": true, 00:07:20.647 "reset": true, 00:07:20.647 "nvme_admin": false, 00:07:20.647 "nvme_io": false, 00:07:20.647 "nvme_io_md": false, 00:07:20.647 "write_zeroes": true, 00:07:20.647 "zcopy": true, 00:07:20.647 "get_zone_info": false, 00:07:20.647 "zone_management": false, 00:07:20.647 "zone_append": false, 00:07:20.647 "compare": false, 00:07:20.647 "compare_and_write": false, 00:07:20.647 
"abort": true, 00:07:20.647 "seek_hole": false, 00:07:20.647 "seek_data": false, 00:07:20.647 "copy": true, 00:07:20.647 "nvme_iov_md": false 00:07:20.647 }, 00:07:20.647 "memory_domains": [ 00:07:20.647 { 00:07:20.647 "dma_device_id": "system", 00:07:20.647 "dma_device_type": 1 00:07:20.647 }, 00:07:20.647 { 00:07:20.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.647 "dma_device_type": 2 00:07:20.647 } 00:07:20.647 ], 00:07:20.647 "driver_specific": {} 00:07:20.647 } 00:07:20.647 ] 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.647 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.647 "name": "Existed_Raid", 00:07:20.647 "uuid": "0f9d3281-9068-487c-8608-90b2ce340790", 00:07:20.647 "strip_size_kb": 64, 00:07:20.647 "state": "configuring", 00:07:20.647 "raid_level": "raid0", 00:07:20.647 "superblock": true, 00:07:20.647 "num_base_bdevs": 2, 00:07:20.647 "num_base_bdevs_discovered": 1, 00:07:20.647 "num_base_bdevs_operational": 2, 00:07:20.647 "base_bdevs_list": [ 00:07:20.647 { 00:07:20.647 "name": "BaseBdev1", 00:07:20.647 "uuid": "447c146a-c586-43ee-830d-75707fb1b821", 00:07:20.647 "is_configured": true, 00:07:20.647 "data_offset": 2048, 00:07:20.647 "data_size": 63488 00:07:20.647 }, 00:07:20.647 { 00:07:20.647 "name": "BaseBdev2", 00:07:20.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.647 "is_configured": false, 00:07:20.647 "data_offset": 0, 00:07:20.647 "data_size": 0 00:07:20.647 } 00:07:20.648 ] 00:07:20.648 }' 00:07:20.648 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.648 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.907 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:20.907 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.907 16:33:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:20.907 [2024-12-07 16:33:19.780902] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:20.907 [2024-12-07 16:33:19.780956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:20.907 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.907 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.907 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.908 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.908 [2024-12-07 16:33:19.792918] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:20.908 [2024-12-07 16:33:19.795064] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.908 [2024-12-07 16:33:19.795139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.908 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.908 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:20.908 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:20.908 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:20.908 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.908 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.908 16:33:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.908 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.908 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.908 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.908 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.908 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.908 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.167 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.167 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.167 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.167 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.167 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.167 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.167 "name": "Existed_Raid", 00:07:21.167 "uuid": "a348dc2a-beba-4f28-a5f6-44d4e1a03634", 00:07:21.167 "strip_size_kb": 64, 00:07:21.167 "state": "configuring", 00:07:21.167 "raid_level": "raid0", 00:07:21.167 "superblock": true, 00:07:21.167 "num_base_bdevs": 2, 00:07:21.167 "num_base_bdevs_discovered": 1, 00:07:21.167 "num_base_bdevs_operational": 2, 00:07:21.167 "base_bdevs_list": [ 00:07:21.167 { 00:07:21.167 "name": "BaseBdev1", 00:07:21.167 "uuid": "447c146a-c586-43ee-830d-75707fb1b821", 00:07:21.167 "is_configured": true, 00:07:21.167 "data_offset": 2048, 
00:07:21.167 "data_size": 63488 00:07:21.167 }, 00:07:21.167 { 00:07:21.167 "name": "BaseBdev2", 00:07:21.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.167 "is_configured": false, 00:07:21.167 "data_offset": 0, 00:07:21.167 "data_size": 0 00:07:21.167 } 00:07:21.167 ] 00:07:21.167 }' 00:07:21.167 16:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.167 16:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.427 [2024-12-07 16:33:20.239602] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:21.427 [2024-12-07 16:33:20.239946] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:21.427 [2024-12-07 16:33:20.239970] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:21.427 [2024-12-07 16:33:20.240367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:21.427 BaseBdev2 00:07:21.427 [2024-12-07 16:33:20.240551] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:21.427 [2024-12-07 16:33:20.240577] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:21.427 [2024-12-07 16:33:20.240723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.427 [ 00:07:21.427 { 00:07:21.427 "name": "BaseBdev2", 00:07:21.427 "aliases": [ 00:07:21.427 "d030261a-fab4-4326-8e6e-19453b4a1889" 00:07:21.427 ], 00:07:21.427 "product_name": "Malloc disk", 00:07:21.427 "block_size": 512, 00:07:21.427 "num_blocks": 65536, 00:07:21.427 "uuid": "d030261a-fab4-4326-8e6e-19453b4a1889", 00:07:21.427 "assigned_rate_limits": { 00:07:21.427 "rw_ios_per_sec": 0, 00:07:21.427 "rw_mbytes_per_sec": 0, 00:07:21.427 "r_mbytes_per_sec": 0, 00:07:21.427 "w_mbytes_per_sec": 0 00:07:21.427 }, 00:07:21.427 "claimed": true, 00:07:21.427 "claim_type": 
"exclusive_write", 00:07:21.427 "zoned": false, 00:07:21.427 "supported_io_types": { 00:07:21.427 "read": true, 00:07:21.427 "write": true, 00:07:21.427 "unmap": true, 00:07:21.427 "flush": true, 00:07:21.427 "reset": true, 00:07:21.427 "nvme_admin": false, 00:07:21.427 "nvme_io": false, 00:07:21.427 "nvme_io_md": false, 00:07:21.427 "write_zeroes": true, 00:07:21.427 "zcopy": true, 00:07:21.427 "get_zone_info": false, 00:07:21.427 "zone_management": false, 00:07:21.427 "zone_append": false, 00:07:21.427 "compare": false, 00:07:21.427 "compare_and_write": false, 00:07:21.427 "abort": true, 00:07:21.427 "seek_hole": false, 00:07:21.427 "seek_data": false, 00:07:21.427 "copy": true, 00:07:21.427 "nvme_iov_md": false 00:07:21.427 }, 00:07:21.427 "memory_domains": [ 00:07:21.427 { 00:07:21.427 "dma_device_id": "system", 00:07:21.427 "dma_device_type": 1 00:07:21.427 }, 00:07:21.427 { 00:07:21.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.427 "dma_device_type": 2 00:07:21.427 } 00:07:21.427 ], 00:07:21.427 "driver_specific": {} 00:07:21.427 } 00:07:21.427 ] 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.427 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.428 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.428 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.428 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.428 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.428 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.428 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.686 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.686 "name": "Existed_Raid", 00:07:21.686 "uuid": "a348dc2a-beba-4f28-a5f6-44d4e1a03634", 00:07:21.686 "strip_size_kb": 64, 00:07:21.686 "state": "online", 00:07:21.686 "raid_level": "raid0", 00:07:21.686 "superblock": true, 00:07:21.686 "num_base_bdevs": 2, 00:07:21.686 "num_base_bdevs_discovered": 2, 00:07:21.686 "num_base_bdevs_operational": 2, 00:07:21.686 "base_bdevs_list": [ 00:07:21.686 { 00:07:21.686 "name": "BaseBdev1", 00:07:21.686 "uuid": "447c146a-c586-43ee-830d-75707fb1b821", 00:07:21.686 "is_configured": true, 00:07:21.686 "data_offset": 2048, 00:07:21.686 "data_size": 63488 
00:07:21.686 }, 00:07:21.686 { 00:07:21.686 "name": "BaseBdev2", 00:07:21.686 "uuid": "d030261a-fab4-4326-8e6e-19453b4a1889", 00:07:21.686 "is_configured": true, 00:07:21.686 "data_offset": 2048, 00:07:21.686 "data_size": 63488 00:07:21.686 } 00:07:21.686 ] 00:07:21.686 }' 00:07:21.686 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.686 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:21.945 [2024-12-07 16:33:20.743078] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.945 "name": 
"Existed_Raid", 00:07:21.945 "aliases": [ 00:07:21.945 "a348dc2a-beba-4f28-a5f6-44d4e1a03634" 00:07:21.945 ], 00:07:21.945 "product_name": "Raid Volume", 00:07:21.945 "block_size": 512, 00:07:21.945 "num_blocks": 126976, 00:07:21.945 "uuid": "a348dc2a-beba-4f28-a5f6-44d4e1a03634", 00:07:21.945 "assigned_rate_limits": { 00:07:21.945 "rw_ios_per_sec": 0, 00:07:21.945 "rw_mbytes_per_sec": 0, 00:07:21.945 "r_mbytes_per_sec": 0, 00:07:21.945 "w_mbytes_per_sec": 0 00:07:21.945 }, 00:07:21.945 "claimed": false, 00:07:21.945 "zoned": false, 00:07:21.945 "supported_io_types": { 00:07:21.945 "read": true, 00:07:21.945 "write": true, 00:07:21.945 "unmap": true, 00:07:21.945 "flush": true, 00:07:21.945 "reset": true, 00:07:21.945 "nvme_admin": false, 00:07:21.945 "nvme_io": false, 00:07:21.945 "nvme_io_md": false, 00:07:21.945 "write_zeroes": true, 00:07:21.945 "zcopy": false, 00:07:21.945 "get_zone_info": false, 00:07:21.945 "zone_management": false, 00:07:21.945 "zone_append": false, 00:07:21.945 "compare": false, 00:07:21.945 "compare_and_write": false, 00:07:21.945 "abort": false, 00:07:21.945 "seek_hole": false, 00:07:21.945 "seek_data": false, 00:07:21.945 "copy": false, 00:07:21.945 "nvme_iov_md": false 00:07:21.945 }, 00:07:21.945 "memory_domains": [ 00:07:21.945 { 00:07:21.945 "dma_device_id": "system", 00:07:21.945 "dma_device_type": 1 00:07:21.945 }, 00:07:21.945 { 00:07:21.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.945 "dma_device_type": 2 00:07:21.945 }, 00:07:21.945 { 00:07:21.945 "dma_device_id": "system", 00:07:21.945 "dma_device_type": 1 00:07:21.945 }, 00:07:21.945 { 00:07:21.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.945 "dma_device_type": 2 00:07:21.945 } 00:07:21.945 ], 00:07:21.945 "driver_specific": { 00:07:21.945 "raid": { 00:07:21.945 "uuid": "a348dc2a-beba-4f28-a5f6-44d4e1a03634", 00:07:21.945 "strip_size_kb": 64, 00:07:21.945 "state": "online", 00:07:21.945 "raid_level": "raid0", 00:07:21.945 "superblock": true, 00:07:21.945 
"num_base_bdevs": 2, 00:07:21.945 "num_base_bdevs_discovered": 2, 00:07:21.945 "num_base_bdevs_operational": 2, 00:07:21.945 "base_bdevs_list": [ 00:07:21.945 { 00:07:21.945 "name": "BaseBdev1", 00:07:21.945 "uuid": "447c146a-c586-43ee-830d-75707fb1b821", 00:07:21.945 "is_configured": true, 00:07:21.945 "data_offset": 2048, 00:07:21.945 "data_size": 63488 00:07:21.945 }, 00:07:21.945 { 00:07:21.945 "name": "BaseBdev2", 00:07:21.945 "uuid": "d030261a-fab4-4326-8e6e-19453b4a1889", 00:07:21.945 "is_configured": true, 00:07:21.945 "data_offset": 2048, 00:07:21.945 "data_size": 63488 00:07:21.945 } 00:07:21.945 ] 00:07:21.945 } 00:07:21.945 } 00:07:21.945 }' 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:21.945 BaseBdev2' 00:07:21.945 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.205 16:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.205 [2024-12-07 16:33:21.006466] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:22.205 [2024-12-07 16:33:21.006556] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:22.205 [2024-12-07 16:33:21.006629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.205 16:33:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.205 "name": "Existed_Raid", 00:07:22.205 "uuid": "a348dc2a-beba-4f28-a5f6-44d4e1a03634", 00:07:22.205 "strip_size_kb": 64, 00:07:22.205 "state": "offline", 00:07:22.205 "raid_level": "raid0", 00:07:22.205 "superblock": true, 00:07:22.205 "num_base_bdevs": 2, 00:07:22.205 "num_base_bdevs_discovered": 1, 00:07:22.205 "num_base_bdevs_operational": 1, 00:07:22.205 "base_bdevs_list": [ 00:07:22.205 { 00:07:22.205 "name": null, 00:07:22.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.205 "is_configured": false, 00:07:22.205 "data_offset": 0, 00:07:22.205 "data_size": 63488 00:07:22.205 }, 00:07:22.205 { 00:07:22.205 "name": "BaseBdev2", 00:07:22.205 "uuid": "d030261a-fab4-4326-8e6e-19453b4a1889", 00:07:22.205 "is_configured": true, 00:07:22.205 "data_offset": 2048, 00:07:22.205 "data_size": 63488 00:07:22.205 } 00:07:22.205 ] 00:07:22.205 }' 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.205 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:22.774 16:33:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.774 [2024-12-07 16:33:21.525704] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:22.774 [2024-12-07 16:33:21.525828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.774 16:33:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72578
00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72578 ']'
00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72578
00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72578
00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
killing process with pid 72578
00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72578'
00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72578
00:07:22.774 [2024-12-07 16:33:21.642863] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:22.774 16:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72578
00:07:22.774 [2024-12-07 16:33:21.644450] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:23.344 16:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:07:23.344
00:07:23.344 real 0m4.136s
00:07:23.344 user 0m6.347s
00:07:23.345 sys 0m0.837s
00:07:23.345 ************************************
00:07:23.345 END TEST raid_state_function_test_sb
00:07:23.345 ************************************
00:07:23.345 16:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:23.345 16:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:23.345 16:33:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2
00:07:23.345 16:33:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:23.345 16:33:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:23.345 16:33:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:23.345 ************************************
00:07:23.345 START TEST raid_superblock_test
00:07:23.345 ************************************
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']'
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72819
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72819
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72819 ']'
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:23.345 16:33:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.345 [2024-12-07 16:33:22.180663] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:23.345 [2024-12-07 16:33:22.180891] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72819 ]
00:07:23.605 [2024-12-07 16:33:22.341425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.605 [2024-12-07 16:33:22.419078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.605 [2024-12-07 16:33:22.497012] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:23.605 [2024-12-07 16:33:22.497157] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:24.173 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
malloc1
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.174 [2024-12-07 16:33:23.048691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:24.174 [2024-12-07 16:33:23.048844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:24.174 [2024-12-07 16:33:23.048885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:24.174 [2024-12-07 16:33:23.048924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:24.174 [2024-12-07 16:33:23.051340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:24.174 [2024-12-07 16:33:23.051422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
pt1
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.174 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.433 malloc2
00:07:24.433 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.433 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:24.433 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.433 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.433 [2024-12-07 16:33:23.093737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:24.433 [2024-12-07 16:33:23.093849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:24.433 [2024-12-07 16:33:23.093887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:07:24.433 [2024-12-07 16:33:23.093916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:24.433 [2024-12-07 16:33:23.096494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:24.433 [2024-12-07 16:33:23.096563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
pt2
00:07:24.433 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.433 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:24.433 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:24.433 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:07:24.433 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.433 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.433 [2024-12-07 16:33:23.105778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:24.433 [2024-12-07 16:33:23.107869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:24.433 [2024-12-07 16:33:23.108048] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:07:24.433 [2024-12-07 16:33:23.108067] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:24.433 [2024-12-07 16:33:23.108322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:07:24.433 [2024-12-07 16:33:23.108464] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:07:24.433 [2024-12-07 16:33:23.108473] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:07:24.433 [2024-12-07 16:33:23.108595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:24.433 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.433 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:24.433 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:24.433 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:24.434 "name": "raid_bdev1",
00:07:24.434 "uuid": "2678f243-db9d-4683-9fc7-5298b479bb49",
00:07:24.434 "strip_size_kb": 64,
00:07:24.434 "state": "online",
00:07:24.434 "raid_level": "raid0",
00:07:24.434 "superblock": true,
00:07:24.434 "num_base_bdevs": 2,
00:07:24.434 "num_base_bdevs_discovered": 2,
00:07:24.434 "num_base_bdevs_operational": 2,
00:07:24.434 "base_bdevs_list": [
00:07:24.434 {
00:07:24.434 "name": "pt1",
00:07:24.434 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:24.434 "is_configured": true,
00:07:24.434 "data_offset": 2048,
00:07:24.434 "data_size": 63488
00:07:24.434 },
00:07:24.434 {
00:07:24.434 "name": "pt2",
00:07:24.434 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:24.434 "is_configured": true,
00:07:24.434 "data_offset": 2048,
00:07:24.434 "data_size": 63488
00:07:24.434 }
00:07:24.434 ]
00:07:24.434 }'
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:24.434 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.693 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:07:24.693 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:24.693 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:24.693 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:24.693 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:24.693 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:24.693 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:24.693 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:24.693 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.693 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.693 [2024-12-07 16:33:23.569235] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:24.952 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.952 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:24.952 "name": "raid_bdev1",
00:07:24.952 "aliases": [
00:07:24.952 "2678f243-db9d-4683-9fc7-5298b479bb49"
00:07:24.952 ],
00:07:24.952 "product_name": "Raid Volume",
00:07:24.952 "block_size": 512,
00:07:24.952 "num_blocks": 126976,
00:07:24.952 "uuid": "2678f243-db9d-4683-9fc7-5298b479bb49",
00:07:24.953 "assigned_rate_limits": {
00:07:24.953 "rw_ios_per_sec": 0,
00:07:24.953 "rw_mbytes_per_sec": 0,
00:07:24.953 "r_mbytes_per_sec": 0,
00:07:24.953 "w_mbytes_per_sec": 0
00:07:24.953 },
00:07:24.953 "claimed": false,
00:07:24.953 "zoned": false,
00:07:24.953 "supported_io_types": {
00:07:24.953 "read": true,
00:07:24.953 "write": true,
00:07:24.953 "unmap": true,
00:07:24.953 "flush": true,
00:07:24.953 "reset": true,
00:07:24.953 "nvme_admin": false,
00:07:24.953 "nvme_io": false,
00:07:24.953 "nvme_io_md": false,
00:07:24.953 "write_zeroes": true,
00:07:24.953 "zcopy": false,
00:07:24.953 "get_zone_info": false,
00:07:24.953 "zone_management": false,
00:07:24.953 "zone_append": false,
00:07:24.953 "compare": false,
00:07:24.953 "compare_and_write": false,
00:07:24.953 "abort": false,
00:07:24.953 "seek_hole": false,
00:07:24.953 "seek_data": false,
00:07:24.953 "copy": false,
00:07:24.953 "nvme_iov_md": false
00:07:24.953 },
00:07:24.953 "memory_domains": [
00:07:24.953 {
00:07:24.953 "dma_device_id": "system",
00:07:24.953 "dma_device_type": 1
00:07:24.953 },
00:07:24.953 {
00:07:24.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:24.953 "dma_device_type": 2
00:07:24.953 },
00:07:24.953 {
00:07:24.953 "dma_device_id": "system",
00:07:24.953 "dma_device_type": 1
00:07:24.953 },
00:07:24.953 {
00:07:24.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:24.953 "dma_device_type": 2
00:07:24.953 }
00:07:24.953 ],
00:07:24.953 "driver_specific": {
00:07:24.953 "raid": {
00:07:24.953 "uuid": "2678f243-db9d-4683-9fc7-5298b479bb49",
00:07:24.953 "strip_size_kb": 64,
00:07:24.953 "state": "online",
00:07:24.953 "raid_level": "raid0",
00:07:24.953 "superblock": true,
00:07:24.953 "num_base_bdevs": 2,
00:07:24.953 "num_base_bdevs_discovered": 2,
00:07:24.953 "num_base_bdevs_operational": 2,
00:07:24.953 "base_bdevs_list": [
00:07:24.953 {
00:07:24.953 "name": "pt1",
00:07:24.953 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:24.953 "is_configured": true,
00:07:24.953 "data_offset": 2048,
00:07:24.953 "data_size": 63488
00:07:24.953 },
00:07:24.953 {
00:07:24.953 "name": "pt2",
00:07:24.953 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:24.953 "is_configured": true,
00:07:24.953 "data_offset": 2048,
00:07:24.953 "data_size": 63488
00:07:24.953 }
00:07:24.953 ]
00:07:24.953 }
00:07:24.953 }
00:07:24.953 }'
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:24.953 pt2'
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:07:24.953 [2024-12-07 16:33:23.784740] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2678f243-db9d-4683-9fc7-5298b479bb49
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2678f243-db9d-4683-9fc7-5298b479bb49 ']'
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.953 [2024-12-07 16:33:23.836444] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:24.953 [2024-12-07 16:33:23.836475] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:24.953 [2024-12-07 16:33:23.836557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:24.953 [2024-12-07 16:33:23.836617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:24.953 [2024-12-07 16:33:23.836636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.953 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.213 [2024-12-07 16:33:23.980248] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:07:25.213 [2024-12-07 16:33:23.982455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:07:25.213 [2024-12-07 16:33:23.982566] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:07:25.213 [2024-12-07 16:33:23.982648] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:07:25.213 [2024-12-07 16:33:23.982687] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:25.213 [2024-12-07 16:33:23.982708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:07:25.213 request:
00:07:25.213 {
00:07:25.213 "name": "raid_bdev1",
00:07:25.213 "raid_level": "raid0",
00:07:25.213 "base_bdevs": [
00:07:25.213 "malloc1",
00:07:25.213 "malloc2"
00:07:25.213 ],
00:07:25.213 "strip_size_kb": 64,
00:07:25.213 "superblock": false,
00:07:25.213 "method": "bdev_raid_create",
00:07:25.213 "req_id": 1
00:07:25.213 }
00:07:25.213 Got JSON-RPC error response
00:07:25.213 response:
00:07:25.213 {
00:07:25.213 "code": -17,
00:07:25.213 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:07:25.213 }
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.213 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.213 [2024-12-07 16:33:24.044097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:25.213 [2024-12-07 16:33:24.044181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:25.213 [2024-12-07 16:33:24.044209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:07:25.213 [2024-12-07 16:33:24.044218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:25.213 [2024-12-07 16:33:24.046624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:25.213 [2024-12-07 16:33:24.046657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:25.213 [2024-12-07 16:33:24.046720] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:07:25.213 [2024-12-07 16:33:24.046759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
pt1
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:25.213 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.214 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:25.214 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.214 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.214 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:25.214 "name": "raid_bdev1",
00:07:25.214 "uuid": "2678f243-db9d-4683-9fc7-5298b479bb49",
00:07:25.214 "strip_size_kb": 64,
00:07:25.214 "state": "configuring",
00:07:25.214 "raid_level": "raid0",
00:07:25.214 "superblock": true,
00:07:25.214 "num_base_bdevs": 2,
00:07:25.214 "num_base_bdevs_discovered": 1,
00:07:25.214 "num_base_bdevs_operational": 2,
00:07:25.214 "base_bdevs_list": [
00:07:25.214 {
00:07:25.214 "name": "pt1",
00:07:25.214 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:25.214 "is_configured": true,
00:07:25.214 "data_offset": 2048,
00:07:25.214 "data_size": 63488
00:07:25.214 },
00:07:25.214 {
00:07:25.214 "name": null,
00:07:25.214 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:25.214 "is_configured": false,
00:07:25.214 "data_offset": 2048,
00:07:25.214 "data_size": 63488
00:07:25.214 }
00:07:25.214 ]
00:07:25.214 }'
00:07:25.214 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:25.214 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.783 [2024-12-07 16:33:24.475473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:25.783 [2024-12-07 16:33:24.475592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:25.783 [2024-12-07 16:33:24.475634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:07:25.783 [2024-12-07 16:33:24.475662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:25.783 [2024-12-07 16:33:24.476109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:25.783 [2024-12-07 16:33:24.476162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:25.783 [2024-12-07 16:33:24.476254] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:07:25.783 [2024-12-07 16:33:24.476298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:25.783 [2024-12-07 16:33:24.476414] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:07:25.783 [2024-12-07 16:33:24.476450] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:25.783 [2024-12-07 16:33:24.476714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:07:25.783 [2024-12-07 16:33:24.476858] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:07:25.783 [2024-12-07 16:33:24.476903] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:07:25.783 [2024-12-07 16:33:24.477035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
pt2
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:25.783 16:33:24 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.783 "name": "raid_bdev1", 00:07:25.783 "uuid": "2678f243-db9d-4683-9fc7-5298b479bb49", 00:07:25.783 "strip_size_kb": 64, 00:07:25.783 "state": "online", 00:07:25.783 "raid_level": "raid0", 00:07:25.783 "superblock": true, 00:07:25.783 "num_base_bdevs": 2, 00:07:25.783 "num_base_bdevs_discovered": 2, 00:07:25.783 "num_base_bdevs_operational": 2, 00:07:25.783 "base_bdevs_list": [ 00:07:25.783 { 00:07:25.783 "name": "pt1", 00:07:25.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.783 "is_configured": true, 00:07:25.783 "data_offset": 2048, 00:07:25.783 "data_size": 63488 00:07:25.783 }, 00:07:25.783 { 00:07:25.783 "name": "pt2", 00:07:25.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.783 "is_configured": true, 00:07:25.783 "data_offset": 2048, 00:07:25.783 "data_size": 63488 00:07:25.783 } 00:07:25.783 ] 00:07:25.783 }' 00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.783 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.043 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:26.043 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:26.043 
16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.043 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.043 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.043 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.043 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:26.043 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.043 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.043 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.043 [2024-12-07 16:33:24.938935] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.303 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.303 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.303 "name": "raid_bdev1", 00:07:26.303 "aliases": [ 00:07:26.303 "2678f243-db9d-4683-9fc7-5298b479bb49" 00:07:26.303 ], 00:07:26.303 "product_name": "Raid Volume", 00:07:26.303 "block_size": 512, 00:07:26.303 "num_blocks": 126976, 00:07:26.303 "uuid": "2678f243-db9d-4683-9fc7-5298b479bb49", 00:07:26.303 "assigned_rate_limits": { 00:07:26.303 "rw_ios_per_sec": 0, 00:07:26.303 "rw_mbytes_per_sec": 0, 00:07:26.303 "r_mbytes_per_sec": 0, 00:07:26.303 "w_mbytes_per_sec": 0 00:07:26.303 }, 00:07:26.303 "claimed": false, 00:07:26.303 "zoned": false, 00:07:26.303 "supported_io_types": { 00:07:26.303 "read": true, 00:07:26.303 "write": true, 00:07:26.303 "unmap": true, 00:07:26.303 "flush": true, 00:07:26.303 "reset": true, 00:07:26.303 "nvme_admin": false, 00:07:26.303 "nvme_io": false, 00:07:26.303 "nvme_io_md": false, 00:07:26.303 
"write_zeroes": true, 00:07:26.303 "zcopy": false, 00:07:26.303 "get_zone_info": false, 00:07:26.303 "zone_management": false, 00:07:26.303 "zone_append": false, 00:07:26.303 "compare": false, 00:07:26.303 "compare_and_write": false, 00:07:26.303 "abort": false, 00:07:26.303 "seek_hole": false, 00:07:26.303 "seek_data": false, 00:07:26.303 "copy": false, 00:07:26.303 "nvme_iov_md": false 00:07:26.303 }, 00:07:26.303 "memory_domains": [ 00:07:26.303 { 00:07:26.303 "dma_device_id": "system", 00:07:26.303 "dma_device_type": 1 00:07:26.303 }, 00:07:26.303 { 00:07:26.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.303 "dma_device_type": 2 00:07:26.303 }, 00:07:26.303 { 00:07:26.303 "dma_device_id": "system", 00:07:26.303 "dma_device_type": 1 00:07:26.303 }, 00:07:26.303 { 00:07:26.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.303 "dma_device_type": 2 00:07:26.303 } 00:07:26.303 ], 00:07:26.303 "driver_specific": { 00:07:26.303 "raid": { 00:07:26.303 "uuid": "2678f243-db9d-4683-9fc7-5298b479bb49", 00:07:26.303 "strip_size_kb": 64, 00:07:26.303 "state": "online", 00:07:26.303 "raid_level": "raid0", 00:07:26.303 "superblock": true, 00:07:26.303 "num_base_bdevs": 2, 00:07:26.303 "num_base_bdevs_discovered": 2, 00:07:26.303 "num_base_bdevs_operational": 2, 00:07:26.303 "base_bdevs_list": [ 00:07:26.303 { 00:07:26.303 "name": "pt1", 00:07:26.303 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.303 "is_configured": true, 00:07:26.303 "data_offset": 2048, 00:07:26.303 "data_size": 63488 00:07:26.303 }, 00:07:26.303 { 00:07:26.303 "name": "pt2", 00:07:26.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.303 "is_configured": true, 00:07:26.303 "data_offset": 2048, 00:07:26.303 "data_size": 63488 00:07:26.303 } 00:07:26.303 ] 00:07:26.303 } 00:07:26.303 } 00:07:26.303 }' 00:07:26.303 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:26.303 pt2' 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.303 16:33:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.303 [2024-12-07 16:33:25.174515] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.303 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.562 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2678f243-db9d-4683-9fc7-5298b479bb49 '!=' 2678f243-db9d-4683-9fc7-5298b479bb49 ']' 00:07:26.562 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:26.562 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:26.562 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:26.563 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72819 00:07:26.563 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72819 ']' 00:07:26.563 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72819 00:07:26.563 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:26.563 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.563 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72819 00:07:26.563 killing process with pid 72819 
00:07:26.563 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.563 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.563 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72819' 00:07:26.563 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72819 00:07:26.563 [2024-12-07 16:33:25.255492] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:26.563 [2024-12-07 16:33:25.255581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.563 [2024-12-07 16:33:25.255633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:26.563 [2024-12-07 16:33:25.255643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:26.563 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72819 00:07:26.563 [2024-12-07 16:33:25.297317] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.821 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:26.821 00:07:26.821 real 0m3.580s 00:07:26.821 user 0m5.334s 00:07:26.821 sys 0m0.816s 00:07:26.821 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.821 ************************************ 00:07:26.821 END TEST raid_superblock_test 00:07:26.821 ************************************ 00:07:26.821 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.821 16:33:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:26.821 16:33:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:26.821 16:33:25 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.821 16:33:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.080 ************************************ 00:07:27.080 START TEST raid_read_error_test 00:07:27.080 ************************************ 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:27.080 16:33:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eBfl6ocpX6 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73025 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73025 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73025 ']' 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.080 16:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.080 [2024-12-07 16:33:25.824609] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:27.080 [2024-12-07 16:33:25.824826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73025 ] 00:07:27.339 [2024-12-07 16:33:25.985246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.339 [2024-12-07 16:33:26.059637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.339 [2024-12-07 16:33:26.137403] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.339 [2024-12-07 16:33:26.137446] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.908 BaseBdev1_malloc 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.908 true 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.908 [2024-12-07 16:33:26.684729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:27.908 [2024-12-07 16:33:26.684867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.908 [2024-12-07 16:33:26.684911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:27.908 [2024-12-07 16:33:26.684939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.908 [2024-12-07 16:33:26.687383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.908 [2024-12-07 16:33:26.687450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:27.908 BaseBdev1 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:27.908 BaseBdev2_malloc 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.908 true 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.908 [2024-12-07 16:33:26.743629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:27.908 [2024-12-07 16:33:26.743680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.908 [2024-12-07 16:33:26.743698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:27.908 [2024-12-07 16:33:26.743707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.908 [2024-12-07 16:33:26.745921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.908 [2024-12-07 16:33:26.746005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:27.908 BaseBdev2 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:27.908 16:33:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.908 [2024-12-07 16:33:26.755680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:27.908 [2024-12-07 16:33:26.757704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:27.908 [2024-12-07 16:33:26.757905] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:27.908 [2024-12-07 16:33:26.757918] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:27.908 [2024-12-07 16:33:26.758164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:27.908 [2024-12-07 16:33:26.758285] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:27.908 [2024-12-07 16:33:26.758297] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:27.908 [2024-12-07 16:33:26.758432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.908 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.167 16:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.168 "name": "raid_bdev1", 00:07:28.168 "uuid": "a81e4731-4d3e-4678-95ef-b62587260b52", 00:07:28.168 "strip_size_kb": 64, 00:07:28.168 "state": "online", 00:07:28.168 "raid_level": "raid0", 00:07:28.168 "superblock": true, 00:07:28.168 "num_base_bdevs": 2, 00:07:28.168 "num_base_bdevs_discovered": 2, 00:07:28.168 "num_base_bdevs_operational": 2, 00:07:28.168 "base_bdevs_list": [ 00:07:28.168 { 00:07:28.168 "name": "BaseBdev1", 00:07:28.168 "uuid": "71e25da1-582a-5d25-af2f-1d703b8f92e3", 00:07:28.168 "is_configured": true, 00:07:28.168 "data_offset": 2048, 00:07:28.168 "data_size": 63488 00:07:28.168 }, 00:07:28.168 { 00:07:28.168 "name": "BaseBdev2", 00:07:28.168 "uuid": "59081f7f-83a1-592b-88d0-d3a1b5151e6d", 00:07:28.168 "is_configured": true, 00:07:28.168 "data_offset": 2048, 00:07:28.168 "data_size": 63488 00:07:28.168 } 00:07:28.168 ] 00:07:28.168 }' 00:07:28.168 16:33:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.168 16:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.427 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:28.427 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:28.427 [2024-12-07 16:33:27.291267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.366 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.626 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.626 "name": "raid_bdev1", 00:07:29.626 "uuid": "a81e4731-4d3e-4678-95ef-b62587260b52", 00:07:29.626 "strip_size_kb": 64, 00:07:29.626 "state": "online", 00:07:29.626 "raid_level": "raid0", 00:07:29.626 "superblock": true, 00:07:29.626 "num_base_bdevs": 2, 00:07:29.626 "num_base_bdevs_discovered": 2, 00:07:29.626 "num_base_bdevs_operational": 2, 00:07:29.626 "base_bdevs_list": [ 00:07:29.626 { 00:07:29.626 "name": "BaseBdev1", 00:07:29.626 "uuid": "71e25da1-582a-5d25-af2f-1d703b8f92e3", 00:07:29.626 "is_configured": true, 00:07:29.626 "data_offset": 2048, 00:07:29.626 "data_size": 63488 00:07:29.626 }, 00:07:29.626 { 00:07:29.626 "name": "BaseBdev2", 00:07:29.626 "uuid": "59081f7f-83a1-592b-88d0-d3a1b5151e6d", 00:07:29.626 "is_configured": true, 00:07:29.626 "data_offset": 2048, 00:07:29.626 "data_size": 63488 00:07:29.626 } 00:07:29.626 ] 00:07:29.626 }' 00:07:29.626 16:33:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.626 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.885 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:29.885 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.885 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.885 [2024-12-07 16:33:28.683620] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:29.885 [2024-12-07 16:33:28.683758] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.886 [2024-12-07 16:33:28.686247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.886 [2024-12-07 16:33:28.686300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.886 [2024-12-07 16:33:28.686351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:29.886 [2024-12-07 16:33:28.686362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:29.886 { 00:07:29.886 "results": [ 00:07:29.886 { 00:07:29.886 "job": "raid_bdev1", 00:07:29.886 "core_mask": "0x1", 00:07:29.886 "workload": "randrw", 00:07:29.886 "percentage": 50, 00:07:29.886 "status": "finished", 00:07:29.886 "queue_depth": 1, 00:07:29.886 "io_size": 131072, 00:07:29.886 "runtime": 1.393195, 00:07:29.886 "iops": 15830.519058710375, 00:07:29.886 "mibps": 1978.814882338797, 00:07:29.886 "io_failed": 1, 00:07:29.886 "io_timeout": 0, 00:07:29.886 "avg_latency_us": 88.6815635626979, 00:07:29.886 "min_latency_us": 24.482096069868994, 00:07:29.886 "max_latency_us": 1387.989519650655 00:07:29.886 } 00:07:29.886 ], 00:07:29.886 "core_count": 1 00:07:29.886 } 00:07:29.886 16:33:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.886 16:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73025 00:07:29.886 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73025 ']' 00:07:29.886 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73025 00:07:29.886 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:29.886 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.886 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73025 00:07:29.886 killing process with pid 73025 00:07:29.886 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.886 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.886 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73025' 00:07:29.886 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73025 00:07:29.886 [2024-12-07 16:33:28.729370] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:29.886 16:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73025 00:07:29.886 [2024-12-07 16:33:28.757565] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.453 16:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:30.454 16:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eBfl6ocpX6 00:07:30.454 16:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:30.454 16:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:30.454 16:33:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:30.454 16:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:30.454 16:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:30.454 16:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:30.454 00:07:30.454 real 0m3.417s 00:07:30.454 user 0m4.197s 00:07:30.454 sys 0m0.616s 00:07:30.454 16:33:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.454 ************************************ 00:07:30.454 END TEST raid_read_error_test 00:07:30.454 ************************************ 00:07:30.454 16:33:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.454 16:33:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:30.454 16:33:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:30.454 16:33:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.454 16:33:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.454 ************************************ 00:07:30.454 START TEST raid_write_error_test 00:07:30.454 ************************************ 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:30.454 16:33:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wWCbQm24nM 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73154 00:07:30.454 16:33:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73154 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73154 ']' 00:07:30.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.454 16:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.454 [2024-12-07 16:33:29.303806] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:30.454 [2024-12-07 16:33:29.304401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73154 ] 00:07:30.712 [2024-12-07 16:33:29.462449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.712 [2024-12-07 16:33:29.536869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.971 [2024-12-07 16:33:29.615052] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.971 [2024-12-07 16:33:29.615113] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.541 BaseBdev1_malloc 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.541 true 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.541 [2024-12-07 16:33:30.170450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:31.541 [2024-12-07 16:33:30.170516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.541 [2024-12-07 16:33:30.170540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:31.541 [2024-12-07 16:33:30.170551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.541 [2024-12-07 16:33:30.172949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.541 [2024-12-07 16:33:30.172980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:31.541 BaseBdev1 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.541 BaseBdev2_malloc 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:31.541 16:33:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.541 true 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.541 [2024-12-07 16:33:30.226489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:31.541 [2024-12-07 16:33:30.226539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.541 [2024-12-07 16:33:30.226557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:31.541 [2024-12-07 16:33:30.226566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.541 [2024-12-07 16:33:30.228909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.541 [2024-12-07 16:33:30.229015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:31.541 BaseBdev2 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.541 [2024-12-07 16:33:30.238520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:31.541 [2024-12-07 16:33:30.240689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:31.541 [2024-12-07 16:33:30.240865] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:31.541 [2024-12-07 16:33:30.240878] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:31.541 [2024-12-07 16:33:30.241145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:31.541 [2024-12-07 16:33:30.241285] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:31.541 [2024-12-07 16:33:30.241298] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:31.541 [2024-12-07 16:33:30.241439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.541 "name": "raid_bdev1", 00:07:31.541 "uuid": "2bb06247-f978-4602-a20f-8d82ce75061d", 00:07:31.541 "strip_size_kb": 64, 00:07:31.541 "state": "online", 00:07:31.541 "raid_level": "raid0", 00:07:31.541 "superblock": true, 00:07:31.541 "num_base_bdevs": 2, 00:07:31.541 "num_base_bdevs_discovered": 2, 00:07:31.541 "num_base_bdevs_operational": 2, 00:07:31.541 "base_bdevs_list": [ 00:07:31.541 { 00:07:31.541 "name": "BaseBdev1", 00:07:31.541 "uuid": "6a9a7ba4-546d-5e45-9feb-0c33084e01e2", 00:07:31.541 "is_configured": true, 00:07:31.541 "data_offset": 2048, 00:07:31.541 "data_size": 63488 00:07:31.541 }, 00:07:31.541 { 00:07:31.541 "name": "BaseBdev2", 00:07:31.541 "uuid": "624c58ac-e1a5-57db-bd65-47a0b8a5a6ab", 00:07:31.541 "is_configured": true, 00:07:31.541 "data_offset": 2048, 00:07:31.541 "data_size": 63488 00:07:31.541 } 00:07:31.541 ] 00:07:31.541 }' 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.541 16:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.801 16:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:31.801 16:33:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:32.062 [2024-12-07 16:33:30.714065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:32.735 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:32.735 16:33:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.735 16:33:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.995 16:33:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.996 16:33:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.996 "name": "raid_bdev1", 00:07:32.996 "uuid": "2bb06247-f978-4602-a20f-8d82ce75061d", 00:07:32.996 "strip_size_kb": 64, 00:07:32.996 "state": "online", 00:07:32.996 "raid_level": "raid0", 00:07:32.996 "superblock": true, 00:07:32.996 "num_base_bdevs": 2, 00:07:32.996 "num_base_bdevs_discovered": 2, 00:07:32.996 "num_base_bdevs_operational": 2, 00:07:32.996 "base_bdevs_list": [ 00:07:32.996 { 00:07:32.996 "name": "BaseBdev1", 00:07:32.996 "uuid": "6a9a7ba4-546d-5e45-9feb-0c33084e01e2", 00:07:32.996 "is_configured": true, 00:07:32.996 "data_offset": 2048, 00:07:32.996 "data_size": 63488 00:07:32.996 }, 00:07:32.996 { 00:07:32.996 "name": "BaseBdev2", 00:07:32.996 "uuid": "624c58ac-e1a5-57db-bd65-47a0b8a5a6ab", 00:07:32.996 "is_configured": true, 00:07:32.996 "data_offset": 2048, 00:07:32.996 "data_size": 63488 00:07:32.996 } 00:07:32.996 ] 00:07:32.996 }' 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.996 16:33:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.255 16:33:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:33.255 16:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.255 16:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.255 [2024-12-07 16:33:32.118456] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.255 [2024-12-07 16:33:32.118502] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.255 [2024-12-07 16:33:32.121039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.255 [2024-12-07 16:33:32.121085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.255 [2024-12-07 16:33:32.121121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.255 [2024-12-07 16:33:32.121131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:33.255 { 00:07:33.255 "results": [ 00:07:33.255 { 00:07:33.255 "job": "raid_bdev1", 00:07:33.255 "core_mask": "0x1", 00:07:33.255 "workload": "randrw", 00:07:33.255 "percentage": 50, 00:07:33.255 "status": "finished", 00:07:33.255 "queue_depth": 1, 00:07:33.255 "io_size": 131072, 00:07:33.255 "runtime": 1.405025, 00:07:33.255 "iops": 15920.713154570203, 00:07:33.255 "mibps": 1990.0891443212754, 00:07:33.255 "io_failed": 1, 00:07:33.255 "io_timeout": 0, 00:07:33.255 "avg_latency_us": 88.00578753906609, 00:07:33.255 "min_latency_us": 24.482096069868994, 00:07:33.255 "max_latency_us": 1273.5161572052402 00:07:33.255 } 00:07:33.255 ], 00:07:33.255 "core_count": 1 00:07:33.256 } 00:07:33.256 16:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.256 16:33:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73154 00:07:33.256 16:33:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 73154 ']' 00:07:33.256 16:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73154 00:07:33.256 16:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:33.256 16:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.256 16:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73154 00:07:33.515 killing process with pid 73154 00:07:33.515 16:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.515 16:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:33.515 16:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73154' 00:07:33.515 16:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73154 00:07:33.515 [2024-12-07 16:33:32.154213] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.515 16:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73154 00:07:33.515 [2024-12-07 16:33:32.182178] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:33.776 16:33:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wWCbQm24nM 00:07:33.776 16:33:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:33.776 16:33:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:33.776 16:33:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:33.776 16:33:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:33.776 ************************************ 00:07:33.776 END TEST raid_write_error_test 00:07:33.776 ************************************ 00:07:33.776 
16:33:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:33.776 16:33:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:33.776 16:33:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:33.776 00:07:33.776 real 0m3.350s 00:07:33.776 user 0m4.090s 00:07:33.776 sys 0m0.600s 00:07:33.776 16:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.776 16:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.776 16:33:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:33.776 16:33:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:33.776 16:33:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:33.776 16:33:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.776 16:33:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:33.776 ************************************ 00:07:33.776 START TEST raid_state_function_test 00:07:33.776 ************************************ 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73281 00:07:33.776 16:33:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73281' 00:07:33.776 Process raid pid: 73281 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73281 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73281 ']' 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.776 16:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.036 [2024-12-07 16:33:32.725942] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:34.036 [2024-12-07 16:33:32.726156] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.036 [2024-12-07 16:33:32.891422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.296 [2024-12-07 16:33:32.963855] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.296 [2024-12-07 16:33:33.043022] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.296 [2024-12-07 16:33:33.043171] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.865 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.865 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:34.865 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:34.865 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.865 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.865 [2024-12-07 16:33:33.547926] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:34.865 [2024-12-07 16:33:33.548082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:34.865 [2024-12-07 16:33:33.548116] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:34.865 [2024-12-07 16:33:33.548141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:34.865 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.865 16:33:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:34.865 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.865 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:34.865 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:34.866 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.866 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.866 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.866 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.866 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.866 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.866 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.866 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.866 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.866 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.866 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.866 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.866 "name": "Existed_Raid", 00:07:34.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.866 "strip_size_kb": 64, 00:07:34.866 "state": "configuring", 00:07:34.866 
"raid_level": "concat", 00:07:34.866 "superblock": false, 00:07:34.866 "num_base_bdevs": 2, 00:07:34.866 "num_base_bdevs_discovered": 0, 00:07:34.866 "num_base_bdevs_operational": 2, 00:07:34.866 "base_bdevs_list": [ 00:07:34.866 { 00:07:34.866 "name": "BaseBdev1", 00:07:34.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.866 "is_configured": false, 00:07:34.866 "data_offset": 0, 00:07:34.866 "data_size": 0 00:07:34.866 }, 00:07:34.866 { 00:07:34.866 "name": "BaseBdev2", 00:07:34.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.866 "is_configured": false, 00:07:34.866 "data_offset": 0, 00:07:34.866 "data_size": 0 00:07:34.866 } 00:07:34.866 ] 00:07:34.866 }' 00:07:34.866 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.866 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.126 [2024-12-07 16:33:33.943149] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:35.126 [2024-12-07 16:33:33.943256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:35.126 [2024-12-07 16:33:33.955170] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:35.126 [2024-12-07 16:33:33.955246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:35.126 [2024-12-07 16:33:33.955271] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:35.126 [2024-12-07 16:33:33.955295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.126 [2024-12-07 16:33:33.981956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:35.126 BaseBdev1 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.126 16:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.126 [ 00:07:35.126 { 00:07:35.126 "name": "BaseBdev1", 00:07:35.126 "aliases": [ 00:07:35.126 "df9923d5-45e9-475e-a39d-fd1b20ffeb40" 00:07:35.126 ], 00:07:35.126 "product_name": "Malloc disk", 00:07:35.126 "block_size": 512, 00:07:35.126 "num_blocks": 65536, 00:07:35.126 "uuid": "df9923d5-45e9-475e-a39d-fd1b20ffeb40", 00:07:35.126 "assigned_rate_limits": { 00:07:35.126 "rw_ios_per_sec": 0, 00:07:35.126 "rw_mbytes_per_sec": 0, 00:07:35.126 "r_mbytes_per_sec": 0, 00:07:35.126 "w_mbytes_per_sec": 0 00:07:35.126 }, 00:07:35.126 "claimed": true, 00:07:35.126 "claim_type": "exclusive_write", 00:07:35.126 "zoned": false, 00:07:35.126 "supported_io_types": { 00:07:35.126 "read": true, 00:07:35.126 "write": true, 00:07:35.126 "unmap": true, 00:07:35.126 "flush": true, 00:07:35.126 "reset": true, 00:07:35.126 "nvme_admin": false, 00:07:35.126 "nvme_io": false, 00:07:35.126 "nvme_io_md": false, 00:07:35.126 "write_zeroes": true, 00:07:35.126 "zcopy": true, 00:07:35.126 "get_zone_info": false, 00:07:35.126 "zone_management": false, 00:07:35.126 "zone_append": false, 00:07:35.126 "compare": false, 00:07:35.126 "compare_and_write": false, 00:07:35.126 "abort": true, 00:07:35.126 "seek_hole": false, 00:07:35.126 "seek_data": false, 00:07:35.126 "copy": true, 00:07:35.126 "nvme_iov_md": 
false 00:07:35.126 }, 00:07:35.126 "memory_domains": [ 00:07:35.126 { 00:07:35.126 "dma_device_id": "system", 00:07:35.126 "dma_device_type": 1 00:07:35.126 }, 00:07:35.126 { 00:07:35.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.126 "dma_device_type": 2 00:07:35.126 } 00:07:35.126 ], 00:07:35.126 "driver_specific": {} 00:07:35.126 } 00:07:35.126 ] 00:07:35.126 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.126 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:35.126 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:35.126 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.126 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:35.126 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:35.126 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.126 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.127 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.127 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.127 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.127 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.386 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.386 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.386 
16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.386 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.386 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.386 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.386 "name": "Existed_Raid", 00:07:35.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.386 "strip_size_kb": 64, 00:07:35.386 "state": "configuring", 00:07:35.386 "raid_level": "concat", 00:07:35.386 "superblock": false, 00:07:35.386 "num_base_bdevs": 2, 00:07:35.386 "num_base_bdevs_discovered": 1, 00:07:35.386 "num_base_bdevs_operational": 2, 00:07:35.386 "base_bdevs_list": [ 00:07:35.386 { 00:07:35.386 "name": "BaseBdev1", 00:07:35.386 "uuid": "df9923d5-45e9-475e-a39d-fd1b20ffeb40", 00:07:35.386 "is_configured": true, 00:07:35.386 "data_offset": 0, 00:07:35.386 "data_size": 65536 00:07:35.386 }, 00:07:35.386 { 00:07:35.386 "name": "BaseBdev2", 00:07:35.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.386 "is_configured": false, 00:07:35.386 "data_offset": 0, 00:07:35.386 "data_size": 0 00:07:35.386 } 00:07:35.386 ] 00:07:35.387 }' 00:07:35.387 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.387 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.646 [2024-12-07 16:33:34.425191] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:35.646 [2024-12-07 16:33:34.425285] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.646 [2024-12-07 16:33:34.433217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:35.646 [2024-12-07 16:33:34.435307] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:35.646 [2024-12-07 16:33:34.435395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.646 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.646 "name": "Existed_Raid", 00:07:35.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.646 "strip_size_kb": 64, 00:07:35.646 "state": "configuring", 00:07:35.646 "raid_level": "concat", 00:07:35.646 "superblock": false, 00:07:35.646 "num_base_bdevs": 2, 00:07:35.646 "num_base_bdevs_discovered": 1, 00:07:35.646 "num_base_bdevs_operational": 2, 00:07:35.646 "base_bdevs_list": [ 00:07:35.646 { 00:07:35.646 "name": "BaseBdev1", 00:07:35.646 "uuid": "df9923d5-45e9-475e-a39d-fd1b20ffeb40", 00:07:35.646 "is_configured": true, 00:07:35.646 "data_offset": 0, 00:07:35.646 "data_size": 65536 00:07:35.647 }, 00:07:35.647 { 00:07:35.647 "name": "BaseBdev2", 00:07:35.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.647 "is_configured": false, 00:07:35.647 "data_offset": 0, 00:07:35.647 "data_size": 0 00:07:35.647 } 
00:07:35.647 ] 00:07:35.647 }' 00:07:35.647 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.647 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.215 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:36.215 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.215 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.216 [2024-12-07 16:33:34.883923] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:36.216 [2024-12-07 16:33:34.884221] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:36.216 [2024-12-07 16:33:34.884297] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:36.216 [2024-12-07 16:33:34.885431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:36.216 [2024-12-07 16:33:34.885895] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:36.216 [2024-12-07 16:33:34.885969] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:36.216 [2024-12-07 16:33:34.886632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.216 BaseBdev2 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:36.216 16:33:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.216 [ 00:07:36.216 { 00:07:36.216 "name": "BaseBdev2", 00:07:36.216 "aliases": [ 00:07:36.216 "728d699c-3c97-4d56-8949-80d41c2225d4" 00:07:36.216 ], 00:07:36.216 "product_name": "Malloc disk", 00:07:36.216 "block_size": 512, 00:07:36.216 "num_blocks": 65536, 00:07:36.216 "uuid": "728d699c-3c97-4d56-8949-80d41c2225d4", 00:07:36.216 "assigned_rate_limits": { 00:07:36.216 "rw_ios_per_sec": 0, 00:07:36.216 "rw_mbytes_per_sec": 0, 00:07:36.216 "r_mbytes_per_sec": 0, 00:07:36.216 "w_mbytes_per_sec": 0 00:07:36.216 }, 00:07:36.216 "claimed": true, 00:07:36.216 "claim_type": "exclusive_write", 00:07:36.216 "zoned": false, 00:07:36.216 "supported_io_types": { 00:07:36.216 "read": true, 00:07:36.216 "write": true, 00:07:36.216 "unmap": true, 00:07:36.216 "flush": true, 00:07:36.216 "reset": true, 00:07:36.216 "nvme_admin": false, 00:07:36.216 "nvme_io": false, 00:07:36.216 "nvme_io_md": 
false, 00:07:36.216 "write_zeroes": true, 00:07:36.216 "zcopy": true, 00:07:36.216 "get_zone_info": false, 00:07:36.216 "zone_management": false, 00:07:36.216 "zone_append": false, 00:07:36.216 "compare": false, 00:07:36.216 "compare_and_write": false, 00:07:36.216 "abort": true, 00:07:36.216 "seek_hole": false, 00:07:36.216 "seek_data": false, 00:07:36.216 "copy": true, 00:07:36.216 "nvme_iov_md": false 00:07:36.216 }, 00:07:36.216 "memory_domains": [ 00:07:36.216 { 00:07:36.216 "dma_device_id": "system", 00:07:36.216 "dma_device_type": 1 00:07:36.216 }, 00:07:36.216 { 00:07:36.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.216 "dma_device_type": 2 00:07:36.216 } 00:07:36.216 ], 00:07:36.216 "driver_specific": {} 00:07:36.216 } 00:07:36.216 ] 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.216 "name": "Existed_Raid", 00:07:36.216 "uuid": "63c6595e-6869-4b8a-8197-03ffa69de0da", 00:07:36.216 "strip_size_kb": 64, 00:07:36.216 "state": "online", 00:07:36.216 "raid_level": "concat", 00:07:36.216 "superblock": false, 00:07:36.216 "num_base_bdevs": 2, 00:07:36.216 "num_base_bdevs_discovered": 2, 00:07:36.216 "num_base_bdevs_operational": 2, 00:07:36.216 "base_bdevs_list": [ 00:07:36.216 { 00:07:36.216 "name": "BaseBdev1", 00:07:36.216 "uuid": "df9923d5-45e9-475e-a39d-fd1b20ffeb40", 00:07:36.216 "is_configured": true, 00:07:36.216 "data_offset": 0, 00:07:36.216 "data_size": 65536 00:07:36.216 }, 00:07:36.216 { 00:07:36.216 "name": "BaseBdev2", 00:07:36.216 "uuid": "728d699c-3c97-4d56-8949-80d41c2225d4", 00:07:36.216 "is_configured": true, 00:07:36.216 "data_offset": 0, 00:07:36.216 "data_size": 65536 00:07:36.216 } 00:07:36.216 ] 00:07:36.216 }' 00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:36.216 16:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.478 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:36.478 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:36.478 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:36.478 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:36.478 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:36.478 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:36.478 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:36.478 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.478 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:36.478 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.478 [2024-12-07 16:33:35.363325] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:36.739 "name": "Existed_Raid", 00:07:36.739 "aliases": [ 00:07:36.739 "63c6595e-6869-4b8a-8197-03ffa69de0da" 00:07:36.739 ], 00:07:36.739 "product_name": "Raid Volume", 00:07:36.739 "block_size": 512, 00:07:36.739 "num_blocks": 131072, 00:07:36.739 "uuid": "63c6595e-6869-4b8a-8197-03ffa69de0da", 00:07:36.739 "assigned_rate_limits": { 00:07:36.739 "rw_ios_per_sec": 0, 00:07:36.739 "rw_mbytes_per_sec": 0, 00:07:36.739 "r_mbytes_per_sec": 
0, 00:07:36.739 "w_mbytes_per_sec": 0 00:07:36.739 }, 00:07:36.739 "claimed": false, 00:07:36.739 "zoned": false, 00:07:36.739 "supported_io_types": { 00:07:36.739 "read": true, 00:07:36.739 "write": true, 00:07:36.739 "unmap": true, 00:07:36.739 "flush": true, 00:07:36.739 "reset": true, 00:07:36.739 "nvme_admin": false, 00:07:36.739 "nvme_io": false, 00:07:36.739 "nvme_io_md": false, 00:07:36.739 "write_zeroes": true, 00:07:36.739 "zcopy": false, 00:07:36.739 "get_zone_info": false, 00:07:36.739 "zone_management": false, 00:07:36.739 "zone_append": false, 00:07:36.739 "compare": false, 00:07:36.739 "compare_and_write": false, 00:07:36.739 "abort": false, 00:07:36.739 "seek_hole": false, 00:07:36.739 "seek_data": false, 00:07:36.739 "copy": false, 00:07:36.739 "nvme_iov_md": false 00:07:36.739 }, 00:07:36.739 "memory_domains": [ 00:07:36.739 { 00:07:36.739 "dma_device_id": "system", 00:07:36.739 "dma_device_type": 1 00:07:36.739 }, 00:07:36.739 { 00:07:36.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.739 "dma_device_type": 2 00:07:36.739 }, 00:07:36.739 { 00:07:36.739 "dma_device_id": "system", 00:07:36.739 "dma_device_type": 1 00:07:36.739 }, 00:07:36.739 { 00:07:36.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.739 "dma_device_type": 2 00:07:36.739 } 00:07:36.739 ], 00:07:36.739 "driver_specific": { 00:07:36.739 "raid": { 00:07:36.739 "uuid": "63c6595e-6869-4b8a-8197-03ffa69de0da", 00:07:36.739 "strip_size_kb": 64, 00:07:36.739 "state": "online", 00:07:36.739 "raid_level": "concat", 00:07:36.739 "superblock": false, 00:07:36.739 "num_base_bdevs": 2, 00:07:36.739 "num_base_bdevs_discovered": 2, 00:07:36.739 "num_base_bdevs_operational": 2, 00:07:36.739 "base_bdevs_list": [ 00:07:36.739 { 00:07:36.739 "name": "BaseBdev1", 00:07:36.739 "uuid": "df9923d5-45e9-475e-a39d-fd1b20ffeb40", 00:07:36.739 "is_configured": true, 00:07:36.739 "data_offset": 0, 00:07:36.739 "data_size": 65536 00:07:36.739 }, 00:07:36.739 { 00:07:36.739 "name": "BaseBdev2", 
00:07:36.739 "uuid": "728d699c-3c97-4d56-8949-80d41c2225d4", 00:07:36.739 "is_configured": true, 00:07:36.739 "data_offset": 0, 00:07:36.739 "data_size": 65536 00:07:36.739 } 00:07:36.739 ] 00:07:36.739 } 00:07:36.739 } 00:07:36.739 }' 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:36.739 BaseBdev2' 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.739 [2024-12-07 16:33:35.538899] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:36.739 [2024-12-07 16:33:35.539031] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:36.739 [2024-12-07 16:33:35.539112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.739 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.739 "name": "Existed_Raid", 00:07:36.739 "uuid": "63c6595e-6869-4b8a-8197-03ffa69de0da", 00:07:36.739 "strip_size_kb": 64, 00:07:36.739 
"state": "offline", 00:07:36.739 "raid_level": "concat", 00:07:36.739 "superblock": false, 00:07:36.739 "num_base_bdevs": 2, 00:07:36.739 "num_base_bdevs_discovered": 1, 00:07:36.739 "num_base_bdevs_operational": 1, 00:07:36.740 "base_bdevs_list": [ 00:07:36.740 { 00:07:36.740 "name": null, 00:07:36.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.740 "is_configured": false, 00:07:36.740 "data_offset": 0, 00:07:36.740 "data_size": 65536 00:07:36.740 }, 00:07:36.740 { 00:07:36.740 "name": "BaseBdev2", 00:07:36.740 "uuid": "728d699c-3c97-4d56-8949-80d41c2225d4", 00:07:36.740 "is_configured": true, 00:07:36.740 "data_offset": 0, 00:07:36.740 "data_size": 65536 00:07:36.740 } 00:07:36.740 ] 00:07:36.740 }' 00:07:36.740 16:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.740 16:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.307 [2024-12-07 16:33:36.107237] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:37.307 [2024-12-07 16:33:36.107412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73281 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73281 ']' 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 73281 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.307 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73281 00:07:37.567 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.567 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.567 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73281' 00:07:37.567 killing process with pid 73281 00:07:37.567 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73281 00:07:37.567 [2024-12-07 16:33:36.227200] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.567 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73281 00:07:37.567 [2024-12-07 16:33:36.228849] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:37.828 00:07:37.828 real 0m3.979s 00:07:37.828 user 0m6.065s 00:07:37.828 sys 0m0.832s 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.828 ************************************ 00:07:37.828 END TEST raid_state_function_test 00:07:37.828 ************************************ 00:07:37.828 16:33:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:37.828 16:33:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:37.828 16:33:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.828 16:33:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:37.828 ************************************ 00:07:37.828 START TEST raid_state_function_test_sb 00:07:37.828 ************************************ 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73523 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73523' 00:07:37.828 Process raid pid: 73523 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73523 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73523 ']' 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.828 16:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.088 [2024-12-07 16:33:36.770434] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:38.088 [2024-12-07 16:33:36.770700] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.088 [2024-12-07 16:33:36.933561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.347 [2024-12-07 16:33:37.003055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.347 [2024-12-07 16:33:37.080318] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.347 [2024-12-07 16:33:37.080476] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.915 16:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.915 16:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:38.915 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:38.915 16:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.915 16:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.915 [2024-12-07 16:33:37.604081] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:38.915 [2024-12-07 16:33:37.604141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:38.915 [2024-12-07 16:33:37.604155] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.915 [2024-12-07 16:33:37.604165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.916 
16:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.916 "name": "Existed_Raid", 00:07:38.916 "uuid": "80dd0f3f-2bad-464d-949b-e507c91ecd33", 00:07:38.916 "strip_size_kb": 64, 00:07:38.916 "state": "configuring", 00:07:38.916 "raid_level": "concat", 00:07:38.916 "superblock": true, 00:07:38.916 "num_base_bdevs": 2, 00:07:38.916 "num_base_bdevs_discovered": 0, 00:07:38.916 "num_base_bdevs_operational": 2, 00:07:38.916 "base_bdevs_list": [ 00:07:38.916 { 00:07:38.916 "name": "BaseBdev1", 00:07:38.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.916 "is_configured": false, 00:07:38.916 "data_offset": 0, 00:07:38.916 "data_size": 0 00:07:38.916 }, 00:07:38.916 { 00:07:38.916 "name": "BaseBdev2", 00:07:38.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.916 "is_configured": false, 00:07:38.916 "data_offset": 0, 00:07:38.916 "data_size": 0 00:07:38.916 } 00:07:38.916 ] 00:07:38.916 }' 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.916 16:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.175 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:39.175 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.175 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.435 [2024-12-07 16:33:38.079165] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:39.435 [2024-12-07 16:33:38.079289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.435 [2024-12-07 16:33:38.091203] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:39.435 [2024-12-07 16:33:38.091284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:39.435 [2024-12-07 16:33:38.091310] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:39.435 [2024-12-07 16:33:38.091332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.435 [2024-12-07 16:33:38.118650] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:39.435 BaseBdev1 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.435 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.435 [ 00:07:39.435 { 00:07:39.435 "name": "BaseBdev1", 00:07:39.435 "aliases": [ 00:07:39.435 "4bed0ffc-2e43-4204-82dd-efa0be15bf3b" 00:07:39.435 ], 00:07:39.435 "product_name": "Malloc disk", 00:07:39.435 "block_size": 512, 00:07:39.435 "num_blocks": 65536, 00:07:39.435 "uuid": "4bed0ffc-2e43-4204-82dd-efa0be15bf3b", 00:07:39.435 "assigned_rate_limits": { 00:07:39.435 "rw_ios_per_sec": 0, 00:07:39.435 "rw_mbytes_per_sec": 0, 00:07:39.435 "r_mbytes_per_sec": 0, 00:07:39.435 "w_mbytes_per_sec": 0 00:07:39.435 }, 00:07:39.435 "claimed": true, 
00:07:39.435 "claim_type": "exclusive_write", 00:07:39.435 "zoned": false, 00:07:39.435 "supported_io_types": { 00:07:39.435 "read": true, 00:07:39.435 "write": true, 00:07:39.435 "unmap": true, 00:07:39.435 "flush": true, 00:07:39.435 "reset": true, 00:07:39.435 "nvme_admin": false, 00:07:39.435 "nvme_io": false, 00:07:39.435 "nvme_io_md": false, 00:07:39.435 "write_zeroes": true, 00:07:39.435 "zcopy": true, 00:07:39.436 "get_zone_info": false, 00:07:39.436 "zone_management": false, 00:07:39.436 "zone_append": false, 00:07:39.436 "compare": false, 00:07:39.436 "compare_and_write": false, 00:07:39.436 "abort": true, 00:07:39.436 "seek_hole": false, 00:07:39.436 "seek_data": false, 00:07:39.436 "copy": true, 00:07:39.436 "nvme_iov_md": false 00:07:39.436 }, 00:07:39.436 "memory_domains": [ 00:07:39.436 { 00:07:39.436 "dma_device_id": "system", 00:07:39.436 "dma_device_type": 1 00:07:39.436 }, 00:07:39.436 { 00:07:39.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.436 "dma_device_type": 2 00:07:39.436 } 00:07:39.436 ], 00:07:39.436 "driver_specific": {} 00:07:39.436 } 00:07:39.436 ] 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.436 16:33:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.436 "name": "Existed_Raid", 00:07:39.436 "uuid": "6a946d3e-1e4e-4575-8857-360cbbd9d330", 00:07:39.436 "strip_size_kb": 64, 00:07:39.436 "state": "configuring", 00:07:39.436 "raid_level": "concat", 00:07:39.436 "superblock": true, 00:07:39.436 "num_base_bdevs": 2, 00:07:39.436 "num_base_bdevs_discovered": 1, 00:07:39.436 "num_base_bdevs_operational": 2, 00:07:39.436 "base_bdevs_list": [ 00:07:39.436 { 00:07:39.436 "name": "BaseBdev1", 00:07:39.436 "uuid": "4bed0ffc-2e43-4204-82dd-efa0be15bf3b", 00:07:39.436 "is_configured": true, 00:07:39.436 "data_offset": 2048, 00:07:39.436 "data_size": 63488 00:07:39.436 }, 00:07:39.436 { 00:07:39.436 "name": "BaseBdev2", 00:07:39.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.436 
"is_configured": false, 00:07:39.436 "data_offset": 0, 00:07:39.436 "data_size": 0 00:07:39.436 } 00:07:39.436 ] 00:07:39.436 }' 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.436 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.004 [2024-12-07 16:33:38.609815] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:40.004 [2024-12-07 16:33:38.609861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.004 [2024-12-07 16:33:38.617852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:40.004 [2024-12-07 16:33:38.620049] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:40.004 [2024-12-07 16:33:38.620090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.004 16:33:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.004 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.004 16:33:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.004 "name": "Existed_Raid", 00:07:40.004 "uuid": "ef820112-be87-4deb-a416-c888f5178dd0", 00:07:40.004 "strip_size_kb": 64, 00:07:40.004 "state": "configuring", 00:07:40.004 "raid_level": "concat", 00:07:40.004 "superblock": true, 00:07:40.004 "num_base_bdevs": 2, 00:07:40.004 "num_base_bdevs_discovered": 1, 00:07:40.004 "num_base_bdevs_operational": 2, 00:07:40.004 "base_bdevs_list": [ 00:07:40.004 { 00:07:40.004 "name": "BaseBdev1", 00:07:40.004 "uuid": "4bed0ffc-2e43-4204-82dd-efa0be15bf3b", 00:07:40.004 "is_configured": true, 00:07:40.005 "data_offset": 2048, 00:07:40.005 "data_size": 63488 00:07:40.005 }, 00:07:40.005 { 00:07:40.005 "name": "BaseBdev2", 00:07:40.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.005 "is_configured": false, 00:07:40.005 "data_offset": 0, 00:07:40.005 "data_size": 0 00:07:40.005 } 00:07:40.005 ] 00:07:40.005 }' 00:07:40.005 16:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.005 16:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.265 [2024-12-07 16:33:39.065121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:40.265 [2024-12-07 16:33:39.065897] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:40.265 [2024-12-07 16:33:39.066057] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:40.265 BaseBdev2 00:07:40.265 [2024-12-07 16:33:39.067041] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:40.265 [2024-12-07 16:33:39.067670] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:40.265 [2024-12-07 16:33:39.067745] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:40.265 [2024-12-07 16:33:39.068344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.265 
16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.265 [ 00:07:40.265 { 00:07:40.265 "name": "BaseBdev2", 00:07:40.265 "aliases": [ 00:07:40.265 "1c453d57-ac9e-4993-9c95-9b92e7f2e4f4" 00:07:40.265 ], 00:07:40.265 "product_name": "Malloc disk", 00:07:40.265 "block_size": 512, 00:07:40.265 "num_blocks": 65536, 00:07:40.265 "uuid": "1c453d57-ac9e-4993-9c95-9b92e7f2e4f4", 00:07:40.265 "assigned_rate_limits": { 00:07:40.265 "rw_ios_per_sec": 0, 00:07:40.265 "rw_mbytes_per_sec": 0, 00:07:40.265 "r_mbytes_per_sec": 0, 00:07:40.265 "w_mbytes_per_sec": 0 00:07:40.265 }, 00:07:40.265 "claimed": true, 00:07:40.265 "claim_type": "exclusive_write", 00:07:40.265 "zoned": false, 00:07:40.265 "supported_io_types": { 00:07:40.265 "read": true, 00:07:40.265 "write": true, 00:07:40.265 "unmap": true, 00:07:40.265 "flush": true, 00:07:40.265 "reset": true, 00:07:40.265 "nvme_admin": false, 00:07:40.265 "nvme_io": false, 00:07:40.265 "nvme_io_md": false, 00:07:40.265 "write_zeroes": true, 00:07:40.265 "zcopy": true, 00:07:40.265 "get_zone_info": false, 00:07:40.265 "zone_management": false, 00:07:40.265 "zone_append": false, 00:07:40.265 "compare": false, 00:07:40.265 "compare_and_write": false, 00:07:40.265 "abort": true, 00:07:40.265 "seek_hole": false, 00:07:40.265 "seek_data": false, 00:07:40.265 "copy": true, 00:07:40.265 "nvme_iov_md": false 00:07:40.265 }, 00:07:40.265 "memory_domains": [ 00:07:40.265 { 00:07:40.265 "dma_device_id": "system", 00:07:40.265 "dma_device_type": 1 00:07:40.265 }, 00:07:40.265 { 00:07:40.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.265 "dma_device_type": 2 00:07:40.265 } 00:07:40.265 ], 00:07:40.265 "driver_specific": {} 00:07:40.265 } 00:07:40.265 ] 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:40.265 16:33:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.265 16:33:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.265 "name": "Existed_Raid", 00:07:40.265 "uuid": "ef820112-be87-4deb-a416-c888f5178dd0", 00:07:40.265 "strip_size_kb": 64, 00:07:40.265 "state": "online", 00:07:40.265 "raid_level": "concat", 00:07:40.265 "superblock": true, 00:07:40.265 "num_base_bdevs": 2, 00:07:40.265 "num_base_bdevs_discovered": 2, 00:07:40.265 "num_base_bdevs_operational": 2, 00:07:40.265 "base_bdevs_list": [ 00:07:40.265 { 00:07:40.265 "name": "BaseBdev1", 00:07:40.265 "uuid": "4bed0ffc-2e43-4204-82dd-efa0be15bf3b", 00:07:40.265 "is_configured": true, 00:07:40.265 "data_offset": 2048, 00:07:40.265 "data_size": 63488 00:07:40.265 }, 00:07:40.265 { 00:07:40.265 "name": "BaseBdev2", 00:07:40.265 "uuid": "1c453d57-ac9e-4993-9c95-9b92e7f2e4f4", 00:07:40.265 "is_configured": true, 00:07:40.265 "data_offset": 2048, 00:07:40.265 "data_size": 63488 00:07:40.265 } 00:07:40.265 ] 00:07:40.265 }' 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.265 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:40.836 16:33:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.836 [2024-12-07 16:33:39.492770] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:40.836 "name": "Existed_Raid", 00:07:40.836 "aliases": [ 00:07:40.836 "ef820112-be87-4deb-a416-c888f5178dd0" 00:07:40.836 ], 00:07:40.836 "product_name": "Raid Volume", 00:07:40.836 "block_size": 512, 00:07:40.836 "num_blocks": 126976, 00:07:40.836 "uuid": "ef820112-be87-4deb-a416-c888f5178dd0", 00:07:40.836 "assigned_rate_limits": { 00:07:40.836 "rw_ios_per_sec": 0, 00:07:40.836 "rw_mbytes_per_sec": 0, 00:07:40.836 "r_mbytes_per_sec": 0, 00:07:40.836 "w_mbytes_per_sec": 0 00:07:40.836 }, 00:07:40.836 "claimed": false, 00:07:40.836 "zoned": false, 00:07:40.836 "supported_io_types": { 00:07:40.836 "read": true, 00:07:40.836 "write": true, 00:07:40.836 "unmap": true, 00:07:40.836 "flush": true, 00:07:40.836 "reset": true, 00:07:40.836 "nvme_admin": false, 00:07:40.836 "nvme_io": false, 00:07:40.836 "nvme_io_md": false, 00:07:40.836 "write_zeroes": true, 00:07:40.836 "zcopy": false, 00:07:40.836 "get_zone_info": false, 00:07:40.836 "zone_management": false, 00:07:40.836 "zone_append": false, 00:07:40.836 "compare": false, 00:07:40.836 "compare_and_write": false, 00:07:40.836 "abort": false, 00:07:40.836 "seek_hole": false, 00:07:40.836 "seek_data": false, 00:07:40.836 "copy": false, 00:07:40.836 "nvme_iov_md": false 00:07:40.836 }, 00:07:40.836 "memory_domains": [ 00:07:40.836 { 00:07:40.836 "dma_device_id": 
"system", 00:07:40.836 "dma_device_type": 1 00:07:40.836 }, 00:07:40.836 { 00:07:40.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.836 "dma_device_type": 2 00:07:40.836 }, 00:07:40.836 { 00:07:40.836 "dma_device_id": "system", 00:07:40.836 "dma_device_type": 1 00:07:40.836 }, 00:07:40.836 { 00:07:40.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.836 "dma_device_type": 2 00:07:40.836 } 00:07:40.836 ], 00:07:40.836 "driver_specific": { 00:07:40.836 "raid": { 00:07:40.836 "uuid": "ef820112-be87-4deb-a416-c888f5178dd0", 00:07:40.836 "strip_size_kb": 64, 00:07:40.836 "state": "online", 00:07:40.836 "raid_level": "concat", 00:07:40.836 "superblock": true, 00:07:40.836 "num_base_bdevs": 2, 00:07:40.836 "num_base_bdevs_discovered": 2, 00:07:40.836 "num_base_bdevs_operational": 2, 00:07:40.836 "base_bdevs_list": [ 00:07:40.836 { 00:07:40.836 "name": "BaseBdev1", 00:07:40.836 "uuid": "4bed0ffc-2e43-4204-82dd-efa0be15bf3b", 00:07:40.836 "is_configured": true, 00:07:40.836 "data_offset": 2048, 00:07:40.836 "data_size": 63488 00:07:40.836 }, 00:07:40.836 { 00:07:40.836 "name": "BaseBdev2", 00:07:40.836 "uuid": "1c453d57-ac9e-4993-9c95-9b92e7f2e4f4", 00:07:40.836 "is_configured": true, 00:07:40.836 "data_offset": 2048, 00:07:40.836 "data_size": 63488 00:07:40.836 } 00:07:40.836 ] 00:07:40.836 } 00:07:40.836 } 00:07:40.836 }' 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:40.836 BaseBdev2' 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.836 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.836 [2024-12-07 16:33:39.716138] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:40.836 [2024-12-07 16:33:39.716190] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.836 [2024-12-07 16:33:39.716269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:41.097 16:33:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.097 "name": "Existed_Raid", 00:07:41.097 "uuid": "ef820112-be87-4deb-a416-c888f5178dd0", 00:07:41.097 "strip_size_kb": 64, 00:07:41.097 "state": "offline", 00:07:41.097 "raid_level": "concat", 00:07:41.097 "superblock": true, 00:07:41.097 "num_base_bdevs": 2, 00:07:41.097 "num_base_bdevs_discovered": 1, 00:07:41.097 "num_base_bdevs_operational": 1, 00:07:41.097 "base_bdevs_list": [ 00:07:41.097 { 00:07:41.097 "name": null, 00:07:41.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.097 "is_configured": false, 00:07:41.097 "data_offset": 0, 00:07:41.097 "data_size": 63488 00:07:41.097 }, 00:07:41.097 { 00:07:41.097 "name": "BaseBdev2", 00:07:41.097 "uuid": "1c453d57-ac9e-4993-9c95-9b92e7f2e4f4", 00:07:41.097 "is_configured": true, 00:07:41.097 "data_offset": 2048, 00:07:41.097 "data_size": 63488 00:07:41.097 } 00:07:41.097 ] 00:07:41.097 }' 00:07:41.097 
16:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.097 16:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.357 [2024-12-07 16:33:40.224662] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:41.357 [2024-12-07 16:33:40.224749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.357 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.617 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.618 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:41.618 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:41.618 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:41.618 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73523 00:07:41.618 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73523 ']' 00:07:41.618 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73523 00:07:41.618 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:41.618 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.618 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73523 00:07:41.618 killing process with pid 73523 00:07:41.618 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:41.618 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:41.618 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73523' 00:07:41.618 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73523 00:07:41.618 [2024-12-07 16:33:40.338894] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.618 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73523 00:07:41.618 [2024-12-07 16:33:40.340558] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.877 16:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:41.877 00:07:41.877 real 0m4.051s 00:07:41.877 user 0m6.139s 00:07:41.877 sys 0m0.895s 00:07:41.877 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.877 16:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.877 ************************************ 00:07:41.877 END TEST raid_state_function_test_sb 00:07:41.877 ************************************ 00:07:42.138 16:33:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:42.138 16:33:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:42.138 16:33:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.138 16:33:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.138 ************************************ 00:07:42.138 START TEST raid_superblock_test 00:07:42.138 ************************************ 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73764 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73764 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73764 ']' 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.138 16:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.138 [2024-12-07 16:33:40.887693] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:42.138 [2024-12-07 16:33:40.888275] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73764 ] 00:07:42.441 [2024-12-07 16:33:41.046881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.441 [2024-12-07 16:33:41.129604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.441 [2024-12-07 16:33:41.208792] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.441 [2024-12-07 16:33:41.208948] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.013 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.013 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:43.013 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:43.013 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:43.013 16:33:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:43.013 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:43.013 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:43.013 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:43.013 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:43.013 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:43.013 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:43.013 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.013 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.013 malloc1 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.014 [2024-12-07 16:33:41.775076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:43.014 [2024-12-07 16:33:41.775241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.014 [2024-12-07 16:33:41.775290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:43.014 [2024-12-07 16:33:41.775331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.014 
[2024-12-07 16:33:41.777965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.014 [2024-12-07 16:33:41.778044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:43.014 pt1 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.014 malloc2 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.014 16:33:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.014 [2024-12-07 16:33:41.821185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:43.014 [2024-12-07 16:33:41.821267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.014 [2024-12-07 16:33:41.821291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:43.014 [2024-12-07 16:33:41.821304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.014 [2024-12-07 16:33:41.823886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.014 [2024-12-07 16:33:41.823924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:43.014 pt2 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.014 [2024-12-07 16:33:41.833250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:43.014 [2024-12-07 16:33:41.835518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:43.014 [2024-12-07 16:33:41.835688] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:43.014 [2024-12-07 16:33:41.835703] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:43.014 
[2024-12-07 16:33:41.836025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:43.014 [2024-12-07 16:33:41.836186] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:43.014 [2024-12-07 16:33:41.836197] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:43.014 [2024-12-07 16:33:41.836391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.014 16:33:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.014 "name": "raid_bdev1", 00:07:43.014 "uuid": "e7665c55-4446-4cb6-a76a-f9e445efc20a", 00:07:43.014 "strip_size_kb": 64, 00:07:43.014 "state": "online", 00:07:43.014 "raid_level": "concat", 00:07:43.014 "superblock": true, 00:07:43.014 "num_base_bdevs": 2, 00:07:43.014 "num_base_bdevs_discovered": 2, 00:07:43.014 "num_base_bdevs_operational": 2, 00:07:43.014 "base_bdevs_list": [ 00:07:43.014 { 00:07:43.014 "name": "pt1", 00:07:43.014 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:43.014 "is_configured": true, 00:07:43.014 "data_offset": 2048, 00:07:43.014 "data_size": 63488 00:07:43.014 }, 00:07:43.014 { 00:07:43.014 "name": "pt2", 00:07:43.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:43.014 "is_configured": true, 00:07:43.014 "data_offset": 2048, 00:07:43.014 "data_size": 63488 00:07:43.014 } 00:07:43.014 ] 00:07:43.014 }' 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.014 16:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:43.584 
16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:43.584 [2024-12-07 16:33:42.312819] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.584 "name": "raid_bdev1", 00:07:43.584 "aliases": [ 00:07:43.584 "e7665c55-4446-4cb6-a76a-f9e445efc20a" 00:07:43.584 ], 00:07:43.584 "product_name": "Raid Volume", 00:07:43.584 "block_size": 512, 00:07:43.584 "num_blocks": 126976, 00:07:43.584 "uuid": "e7665c55-4446-4cb6-a76a-f9e445efc20a", 00:07:43.584 "assigned_rate_limits": { 00:07:43.584 "rw_ios_per_sec": 0, 00:07:43.584 "rw_mbytes_per_sec": 0, 00:07:43.584 "r_mbytes_per_sec": 0, 00:07:43.584 "w_mbytes_per_sec": 0 00:07:43.584 }, 00:07:43.584 "claimed": false, 00:07:43.584 "zoned": false, 00:07:43.584 "supported_io_types": { 00:07:43.584 "read": true, 00:07:43.584 "write": true, 00:07:43.584 "unmap": true, 00:07:43.584 "flush": true, 00:07:43.584 "reset": true, 00:07:43.584 "nvme_admin": false, 00:07:43.584 "nvme_io": false, 00:07:43.584 "nvme_io_md": false, 00:07:43.584 "write_zeroes": true, 00:07:43.584 "zcopy": false, 00:07:43.584 "get_zone_info": false, 00:07:43.584 "zone_management": false, 00:07:43.584 "zone_append": false, 00:07:43.584 "compare": false, 00:07:43.584 "compare_and_write": false, 00:07:43.584 "abort": false, 00:07:43.584 "seek_hole": false, 00:07:43.584 
"seek_data": false, 00:07:43.584 "copy": false, 00:07:43.584 "nvme_iov_md": false 00:07:43.584 }, 00:07:43.584 "memory_domains": [ 00:07:43.584 { 00:07:43.584 "dma_device_id": "system", 00:07:43.584 "dma_device_type": 1 00:07:43.584 }, 00:07:43.584 { 00:07:43.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.584 "dma_device_type": 2 00:07:43.584 }, 00:07:43.584 { 00:07:43.584 "dma_device_id": "system", 00:07:43.584 "dma_device_type": 1 00:07:43.584 }, 00:07:43.584 { 00:07:43.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.584 "dma_device_type": 2 00:07:43.584 } 00:07:43.584 ], 00:07:43.584 "driver_specific": { 00:07:43.584 "raid": { 00:07:43.584 "uuid": "e7665c55-4446-4cb6-a76a-f9e445efc20a", 00:07:43.584 "strip_size_kb": 64, 00:07:43.584 "state": "online", 00:07:43.584 "raid_level": "concat", 00:07:43.584 "superblock": true, 00:07:43.584 "num_base_bdevs": 2, 00:07:43.584 "num_base_bdevs_discovered": 2, 00:07:43.584 "num_base_bdevs_operational": 2, 00:07:43.584 "base_bdevs_list": [ 00:07:43.584 { 00:07:43.584 "name": "pt1", 00:07:43.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:43.584 "is_configured": true, 00:07:43.584 "data_offset": 2048, 00:07:43.584 "data_size": 63488 00:07:43.584 }, 00:07:43.584 { 00:07:43.584 "name": "pt2", 00:07:43.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:43.584 "is_configured": true, 00:07:43.584 "data_offset": 2048, 00:07:43.584 "data_size": 63488 00:07:43.584 } 00:07:43.584 ] 00:07:43.584 } 00:07:43.584 } 00:07:43.584 }' 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:43.584 pt2' 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.584 16:33:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.584 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.845 [2024-12-07 16:33:42.564273] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e7665c55-4446-4cb6-a76a-f9e445efc20a 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e7665c55-4446-4cb6-a76a-f9e445efc20a ']' 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.845 [2024-12-07 16:33:42.587932] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.845 [2024-12-07 16:33:42.588051] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.845 [2024-12-07 16:33:42.588202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.845 [2024-12-07 16:33:42.588294] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.845 [2024-12-07 16:33:42.588363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.845 [2024-12-07 16:33:42.711817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:43.845 [2024-12-07 16:33:42.714108] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:43.845 [2024-12-07 16:33:42.714196] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:43.845 [2024-12-07 16:33:42.714257] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:43.845 [2024-12-07 16:33:42.714275] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.845 [2024-12-07 16:33:42.714286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:43.845 request: 00:07:43.845 { 00:07:43.845 "name": "raid_bdev1", 00:07:43.845 "raid_level": "concat", 00:07:43.845 "base_bdevs": [ 00:07:43.845 "malloc1", 00:07:43.845 "malloc2" 00:07:43.845 ], 00:07:43.845 "strip_size_kb": 64, 00:07:43.845 "superblock": false, 00:07:43.845 "method": "bdev_raid_create", 00:07:43.845 "req_id": 1 00:07:43.845 } 00:07:43.845 Got JSON-RPC error response 00:07:43.845 response: 00:07:43.845 { 00:07:43.845 "code": -17, 00:07:43.845 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:43.845 } 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.845 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.845 
16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.105 [2024-12-07 16:33:42.775635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:44.105 [2024-12-07 16:33:42.775736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.105 [2024-12-07 16:33:42.775765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:44.105 [2024-12-07 16:33:42.775776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.105 [2024-12-07 16:33:42.778382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.105 [2024-12-07 16:33:42.778417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:44.105 [2024-12-07 16:33:42.778522] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:44.105 [2024-12-07 16:33:42.778582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:44.105 pt1 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.105 "name": "raid_bdev1", 00:07:44.105 "uuid": "e7665c55-4446-4cb6-a76a-f9e445efc20a", 00:07:44.105 "strip_size_kb": 64, 00:07:44.105 "state": "configuring", 00:07:44.105 "raid_level": "concat", 00:07:44.105 "superblock": true, 00:07:44.105 "num_base_bdevs": 2, 00:07:44.105 "num_base_bdevs_discovered": 1, 00:07:44.105 "num_base_bdevs_operational": 2, 00:07:44.105 "base_bdevs_list": [ 00:07:44.105 { 00:07:44.105 "name": "pt1", 00:07:44.105 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:44.105 "is_configured": true, 00:07:44.105 "data_offset": 2048, 00:07:44.105 "data_size": 63488 00:07:44.105 }, 00:07:44.105 { 00:07:44.105 "name": null, 00:07:44.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.105 "is_configured": false, 00:07:44.105 "data_offset": 2048, 00:07:44.105 "data_size": 63488 00:07:44.105 } 00:07:44.105 ] 00:07:44.105 }' 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.105 16:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.365 [2024-12-07 16:33:43.222914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:44.365 [2024-12-07 16:33:43.223118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.365 [2024-12-07 16:33:43.223168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:44.365 [2024-12-07 16:33:43.223198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.365 [2024-12-07 16:33:43.223763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.365 [2024-12-07 16:33:43.223827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:44.365 [2024-12-07 16:33:43.223958] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:44.365 [2024-12-07 16:33:43.224011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:44.365 [2024-12-07 16:33:43.224157] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:44.365 [2024-12-07 16:33:43.224196] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:44.365 [2024-12-07 16:33:43.224490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:44.365 [2024-12-07 16:33:43.224648] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:44.365 [2024-12-07 16:33:43.224695] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:44.365 [2024-12-07 16:33:43.224849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.365 pt2 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.365 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.625 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.625 "name": "raid_bdev1", 00:07:44.625 "uuid": "e7665c55-4446-4cb6-a76a-f9e445efc20a", 00:07:44.625 "strip_size_kb": 64, 00:07:44.625 "state": "online", 00:07:44.625 "raid_level": "concat", 00:07:44.625 "superblock": true, 00:07:44.625 "num_base_bdevs": 2, 00:07:44.625 "num_base_bdevs_discovered": 2, 00:07:44.625 "num_base_bdevs_operational": 2, 00:07:44.625 "base_bdevs_list": [ 00:07:44.625 { 00:07:44.625 "name": "pt1", 00:07:44.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.625 "is_configured": true, 00:07:44.625 "data_offset": 2048, 00:07:44.625 "data_size": 63488 00:07:44.625 }, 00:07:44.625 { 00:07:44.625 "name": "pt2", 00:07:44.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.625 "is_configured": true, 00:07:44.625 "data_offset": 2048, 00:07:44.625 "data_size": 63488 00:07:44.625 } 00:07:44.625 ] 00:07:44.625 }' 00:07:44.625 16:33:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.625 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.886 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:44.886 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:44.886 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.886 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.886 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.886 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.886 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.886 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:44.886 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.886 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.886 [2024-12-07 16:33:43.686407] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.886 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.886 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.886 "name": "raid_bdev1", 00:07:44.886 "aliases": [ 00:07:44.886 "e7665c55-4446-4cb6-a76a-f9e445efc20a" 00:07:44.886 ], 00:07:44.886 "product_name": "Raid Volume", 00:07:44.886 "block_size": 512, 00:07:44.886 "num_blocks": 126976, 00:07:44.886 "uuid": "e7665c55-4446-4cb6-a76a-f9e445efc20a", 00:07:44.886 "assigned_rate_limits": { 00:07:44.886 "rw_ios_per_sec": 0, 00:07:44.886 "rw_mbytes_per_sec": 0, 00:07:44.886 
"r_mbytes_per_sec": 0, 00:07:44.886 "w_mbytes_per_sec": 0 00:07:44.886 }, 00:07:44.886 "claimed": false, 00:07:44.886 "zoned": false, 00:07:44.886 "supported_io_types": { 00:07:44.886 "read": true, 00:07:44.886 "write": true, 00:07:44.886 "unmap": true, 00:07:44.886 "flush": true, 00:07:44.886 "reset": true, 00:07:44.886 "nvme_admin": false, 00:07:44.886 "nvme_io": false, 00:07:44.886 "nvme_io_md": false, 00:07:44.886 "write_zeroes": true, 00:07:44.886 "zcopy": false, 00:07:44.886 "get_zone_info": false, 00:07:44.886 "zone_management": false, 00:07:44.886 "zone_append": false, 00:07:44.886 "compare": false, 00:07:44.886 "compare_and_write": false, 00:07:44.886 "abort": false, 00:07:44.886 "seek_hole": false, 00:07:44.886 "seek_data": false, 00:07:44.886 "copy": false, 00:07:44.886 "nvme_iov_md": false 00:07:44.886 }, 00:07:44.886 "memory_domains": [ 00:07:44.886 { 00:07:44.886 "dma_device_id": "system", 00:07:44.886 "dma_device_type": 1 00:07:44.886 }, 00:07:44.886 { 00:07:44.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.886 "dma_device_type": 2 00:07:44.886 }, 00:07:44.886 { 00:07:44.886 "dma_device_id": "system", 00:07:44.886 "dma_device_type": 1 00:07:44.886 }, 00:07:44.886 { 00:07:44.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.886 "dma_device_type": 2 00:07:44.886 } 00:07:44.886 ], 00:07:44.886 "driver_specific": { 00:07:44.886 "raid": { 00:07:44.886 "uuid": "e7665c55-4446-4cb6-a76a-f9e445efc20a", 00:07:44.886 "strip_size_kb": 64, 00:07:44.886 "state": "online", 00:07:44.886 "raid_level": "concat", 00:07:44.886 "superblock": true, 00:07:44.886 "num_base_bdevs": 2, 00:07:44.886 "num_base_bdevs_discovered": 2, 00:07:44.886 "num_base_bdevs_operational": 2, 00:07:44.886 "base_bdevs_list": [ 00:07:44.886 { 00:07:44.886 "name": "pt1", 00:07:44.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.886 "is_configured": true, 00:07:44.886 "data_offset": 2048, 00:07:44.886 "data_size": 63488 00:07:44.886 }, 00:07:44.886 { 00:07:44.886 "name": 
"pt2", 00:07:44.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.886 "is_configured": true, 00:07:44.886 "data_offset": 2048, 00:07:44.886 "data_size": 63488 00:07:44.886 } 00:07:44.886 ] 00:07:44.886 } 00:07:44.887 } 00:07:44.887 }' 00:07:44.887 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.887 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:44.887 pt2' 00:07:44.887 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:45.147 [2024-12-07 16:33:43.898014] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e7665c55-4446-4cb6-a76a-f9e445efc20a '!=' e7665c55-4446-4cb6-a76a-f9e445efc20a ']' 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73764 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73764 ']' 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 73764 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73764 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73764' 00:07:45.147 killing process with pid 73764 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73764 00:07:45.147 [2024-12-07 16:33:43.990446] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.147 16:33:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73764 00:07:45.147 [2024-12-07 16:33:43.990665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.147 [2024-12-07 16:33:43.990734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.147 [2024-12-07 16:33:43.990796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:45.147 [2024-12-07 16:33:44.034659] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.718 16:33:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:45.718 00:07:45.718 real 0m3.609s 00:07:45.718 user 0m5.348s 00:07:45.718 sys 0m0.826s 00:07:45.718 16:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.718 16:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:45.718 ************************************ 00:07:45.718 END TEST raid_superblock_test 00:07:45.718 ************************************ 00:07:45.718 16:33:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:45.718 16:33:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:45.718 16:33:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.718 16:33:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.718 ************************************ 00:07:45.718 START TEST raid_read_error_test 00:07:45.718 ************************************ 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ADbJqCl33f 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73965 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73965 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73965 ']' 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.718 16:33:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.718 [2024-12-07 16:33:44.581853] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:45.718 [2024-12-07 16:33:44.582054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73965 ] 00:07:45.978 [2024-12-07 16:33:44.742115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.978 [2024-12-07 16:33:44.826268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.237 [2024-12-07 16:33:44.906938] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.237 [2024-12-07 16:33:44.907121] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.805 BaseBdev1_malloc 
00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.805 true 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.805 [2024-12-07 16:33:45.461128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:46.805 [2024-12-07 16:33:45.461210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.805 [2024-12-07 16:33:45.461247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:46.805 [2024-12-07 16:33:45.461257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.805 [2024-12-07 16:33:45.463814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.805 [2024-12-07 16:33:45.463853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:46.805 BaseBdev1 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.805 BaseBdev2_malloc 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.805 true 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.805 [2024-12-07 16:33:45.516727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:46.805 [2024-12-07 16:33:45.516808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.805 [2024-12-07 16:33:45.516832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:46.805 [2024-12-07 16:33:45.516842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.805 [2024-12-07 16:33:45.519335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.805 [2024-12-07 16:33:45.519381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:46.805 BaseBdev2 00:07:46.805 16:33:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.806 [2024-12-07 16:33:45.528753] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.806 [2024-12-07 16:33:45.530872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.806 [2024-12-07 16:33:45.531210] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:46.806 [2024-12-07 16:33:45.531234] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:46.806 [2024-12-07 16:33:45.531584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:46.806 [2024-12-07 16:33:45.531749] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:46.806 [2024-12-07 16:33:45.531762] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:46.806 [2024-12-07 16:33:45.531949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.806 "name": "raid_bdev1", 00:07:46.806 "uuid": "2532f6be-1d22-4e6a-b1cd-73c63f0fc344", 00:07:46.806 "strip_size_kb": 64, 00:07:46.806 "state": "online", 00:07:46.806 "raid_level": "concat", 00:07:46.806 "superblock": true, 00:07:46.806 "num_base_bdevs": 2, 00:07:46.806 "num_base_bdevs_discovered": 2, 00:07:46.806 "num_base_bdevs_operational": 2, 00:07:46.806 "base_bdevs_list": [ 00:07:46.806 { 00:07:46.806 "name": "BaseBdev1", 00:07:46.806 "uuid": "8a1d2893-e052-59be-8b2e-fbbc3d93cae0", 00:07:46.806 "is_configured": true, 00:07:46.806 "data_offset": 2048, 00:07:46.806 "data_size": 63488 00:07:46.806 }, 00:07:46.806 { 00:07:46.806 "name": "BaseBdev2", 00:07:46.806 
"uuid": "ff8249e7-bde4-54cd-9e70-ada50d5428f4", 00:07:46.806 "is_configured": true, 00:07:46.806 "data_offset": 2048, 00:07:46.806 "data_size": 63488 00:07:46.806 } 00:07:46.806 ] 00:07:46.806 }' 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.806 16:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.374 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:47.374 16:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:47.374 [2024-12-07 16:33:46.060414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.312 16:33:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.312 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.312 16:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.312 "name": "raid_bdev1", 00:07:48.312 "uuid": "2532f6be-1d22-4e6a-b1cd-73c63f0fc344", 00:07:48.312 "strip_size_kb": 64, 00:07:48.312 "state": "online", 00:07:48.312 "raid_level": "concat", 00:07:48.312 "superblock": true, 00:07:48.312 "num_base_bdevs": 2, 00:07:48.312 "num_base_bdevs_discovered": 2, 00:07:48.312 "num_base_bdevs_operational": 2, 00:07:48.312 "base_bdevs_list": [ 00:07:48.312 { 00:07:48.312 "name": "BaseBdev1", 00:07:48.312 "uuid": "8a1d2893-e052-59be-8b2e-fbbc3d93cae0", 00:07:48.312 "is_configured": true, 00:07:48.312 "data_offset": 2048, 00:07:48.312 "data_size": 63488 00:07:48.312 }, 00:07:48.312 { 00:07:48.312 "name": "BaseBdev2", 00:07:48.312 "uuid": 
"ff8249e7-bde4-54cd-9e70-ada50d5428f4", 00:07:48.312 "is_configured": true, 00:07:48.312 "data_offset": 2048, 00:07:48.312 "data_size": 63488 00:07:48.312 } 00:07:48.312 ] 00:07:48.312 }' 00:07:48.312 16:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.312 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.573 [2024-12-07 16:33:47.400624] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:48.573 [2024-12-07 16:33:47.400674] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.573 [2024-12-07 16:33:47.403220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.573 [2024-12-07 16:33:47.403286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.573 [2024-12-07 16:33:47.403326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.573 [2024-12-07 16:33:47.403336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:48.573 { 00:07:48.573 "results": [ 00:07:48.573 { 00:07:48.573 "job": "raid_bdev1", 00:07:48.573 "core_mask": "0x1", 00:07:48.573 "workload": "randrw", 00:07:48.573 "percentage": 50, 00:07:48.573 "status": "finished", 00:07:48.573 "queue_depth": 1, 00:07:48.573 "io_size": 131072, 00:07:48.573 "runtime": 1.34065, 00:07:48.573 "iops": 14442.994070040651, 00:07:48.573 "mibps": 1805.3742587550814, 00:07:48.573 "io_failed": 1, 00:07:48.573 "io_timeout": 0, 00:07:48.573 "avg_latency_us": 
97.50117166957276, 00:07:48.573 "min_latency_us": 24.258515283842794, 00:07:48.573 "max_latency_us": 1352.216593886463 00:07:48.573 } 00:07:48.573 ], 00:07:48.573 "core_count": 1 00:07:48.573 } 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73965 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73965 ']' 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73965 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73965 00:07:48.573 killing process with pid 73965 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73965' 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73965 00:07:48.573 [2024-12-07 16:33:47.448955] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.573 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73965 00:07:48.834 [2024-12-07 16:33:47.480237] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.095 16:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ADbJqCl33f 00:07:49.095 16:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:49.095 
16:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:49.095 16:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:49.095 16:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:49.095 16:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:49.095 16:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:49.095 ************************************ 00:07:49.095 END TEST raid_read_error_test 00:07:49.095 ************************************ 00:07:49.095 16:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:49.095 00:07:49.095 real 0m3.391s 00:07:49.095 user 0m4.112s 00:07:49.095 sys 0m0.634s 00:07:49.095 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.095 16:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.095 16:33:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:49.095 16:33:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:49.095 16:33:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.095 16:33:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.095 ************************************ 00:07:49.095 START TEST raid_write_error_test 00:07:49.095 ************************************ 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:49.095 16:33:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ao8apIR4qL 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74099 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74099 00:07:49.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74099 ']' 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.095 16:33:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.356 [2024-12-07 16:33:48.044008] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:49.356 [2024-12-07 16:33:48.044146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74099 ] 00:07:49.356 [2024-12-07 16:33:48.205161] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.616 [2024-12-07 16:33:48.288302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.616 [2024-12-07 16:33:48.368441] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.616 [2024-12-07 16:33:48.368487] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.186 BaseBdev1_malloc 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.186 true 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.186 [2024-12-07 16:33:48.934377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:50.186 [2024-12-07 16:33:48.934543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.186 [2024-12-07 16:33:48.934572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:50.186 [2024-12-07 16:33:48.934583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.186 [2024-12-07 16:33:48.937175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.186 [2024-12-07 16:33:48.937215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:50.186 BaseBdev1 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.186 BaseBdev2_malloc 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:50.186 16:33:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.186 true 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.186 [2024-12-07 16:33:48.991246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:50.186 [2024-12-07 16:33:48.991330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.186 [2024-12-07 16:33:48.991370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:50.186 [2024-12-07 16:33:48.991381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.186 [2024-12-07 16:33:48.993862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.186 [2024-12-07 16:33:48.993901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:50.186 BaseBdev2 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.186 16:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.186 [2024-12-07 16:33:49.003262] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:50.186 [2024-12-07 16:33:49.005478] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.186 [2024-12-07 16:33:49.005676] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:50.186 [2024-12-07 16:33:49.005689] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:50.186 [2024-12-07 16:33:49.006016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:50.186 [2024-12-07 16:33:49.006169] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:50.186 [2024-12-07 16:33:49.006182] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:50.186 [2024-12-07 16:33:49.006390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.186 16:33:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.186 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.186 "name": "raid_bdev1", 00:07:50.186 "uuid": "37a11f54-5ec4-4029-bde1-a3fb5e120dee", 00:07:50.186 "strip_size_kb": 64, 00:07:50.186 "state": "online", 00:07:50.186 "raid_level": "concat", 00:07:50.186 "superblock": true, 00:07:50.186 "num_base_bdevs": 2, 00:07:50.186 "num_base_bdevs_discovered": 2, 00:07:50.186 "num_base_bdevs_operational": 2, 00:07:50.186 "base_bdevs_list": [ 00:07:50.186 { 00:07:50.186 "name": "BaseBdev1", 00:07:50.186 "uuid": "5c8866dd-1575-5c54-8bb1-d6a449d2dabe", 00:07:50.186 "is_configured": true, 00:07:50.186 "data_offset": 2048, 00:07:50.186 "data_size": 63488 00:07:50.186 }, 00:07:50.186 { 00:07:50.186 "name": "BaseBdev2", 00:07:50.186 "uuid": "7a967c3b-dfcc-5ebe-8874-e968098ad5a4", 00:07:50.186 "is_configured": true, 00:07:50.186 "data_offset": 2048, 00:07:50.187 "data_size": 63488 00:07:50.187 } 00:07:50.187 ] 00:07:50.187 }' 00:07:50.187 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.187 16:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.816 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:50.816 16:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:50.816 [2024-12-07 16:33:49.534829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:51.755 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:51.755 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.755 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.755 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.755 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.756 "name": "raid_bdev1", 00:07:51.756 "uuid": "37a11f54-5ec4-4029-bde1-a3fb5e120dee", 00:07:51.756 "strip_size_kb": 64, 00:07:51.756 "state": "online", 00:07:51.756 "raid_level": "concat", 00:07:51.756 "superblock": true, 00:07:51.756 "num_base_bdevs": 2, 00:07:51.756 "num_base_bdevs_discovered": 2, 00:07:51.756 "num_base_bdevs_operational": 2, 00:07:51.756 "base_bdevs_list": [ 00:07:51.756 { 00:07:51.756 "name": "BaseBdev1", 00:07:51.756 "uuid": "5c8866dd-1575-5c54-8bb1-d6a449d2dabe", 00:07:51.756 "is_configured": true, 00:07:51.756 "data_offset": 2048, 00:07:51.756 "data_size": 63488 00:07:51.756 }, 00:07:51.756 { 00:07:51.756 "name": "BaseBdev2", 00:07:51.756 "uuid": "7a967c3b-dfcc-5ebe-8874-e968098ad5a4", 00:07:51.756 "is_configured": true, 00:07:51.756 "data_offset": 2048, 00:07:51.756 "data_size": 63488 00:07:51.756 } 00:07:51.756 ] 00:07:51.756 }' 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.756 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.016 16:33:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:52.016 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.016 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.016 [2024-12-07 16:33:50.894897] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.016 [2024-12-07 16:33:50.894948] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.016 [2024-12-07 16:33:50.897403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.016 [2024-12-07 16:33:50.897450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.016 [2024-12-07 16:33:50.897487] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.016 [2024-12-07 16:33:50.897497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:52.016 { 00:07:52.016 "results": [ 00:07:52.016 { 00:07:52.016 "job": "raid_bdev1", 00:07:52.016 "core_mask": "0x1", 00:07:52.016 "workload": "randrw", 00:07:52.016 "percentage": 50, 00:07:52.016 "status": "finished", 00:07:52.016 "queue_depth": 1, 00:07:52.016 "io_size": 131072, 00:07:52.016 "runtime": 1.36047, 00:07:52.016 "iops": 15492.440112608143, 00:07:52.016 "mibps": 1936.5550140760179, 00:07:52.016 "io_failed": 1, 00:07:52.016 "io_timeout": 0, 00:07:52.016 "avg_latency_us": 90.43460931760633, 00:07:52.016 "min_latency_us": 24.705676855895195, 00:07:52.016 "max_latency_us": 1359.3711790393013 00:07:52.016 } 00:07:52.016 ], 00:07:52.016 "core_count": 1 00:07:52.016 } 00:07:52.016 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.016 16:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74099 00:07:52.016 16:33:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74099 ']' 00:07:52.016 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74099 00:07:52.016 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:52.016 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.016 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74099 00:07:52.276 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.276 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.276 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74099' 00:07:52.276 killing process with pid 74099 00:07:52.276 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74099 00:07:52.276 [2024-12-07 16:33:50.946581] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.276 16:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74099 00:07:52.276 [2024-12-07 16:33:50.975481] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.536 16:33:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ao8apIR4qL 00:07:52.536 16:33:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:52.536 16:33:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:52.536 16:33:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:52.536 16:33:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:52.536 16:33:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.536 16:33:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:52.536 16:33:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:52.536 00:07:52.536 real 0m3.415s 00:07:52.536 user 0m4.188s 00:07:52.536 sys 0m0.627s 00:07:52.536 ************************************ 00:07:52.536 END TEST raid_write_error_test 00:07:52.536 ************************************ 00:07:52.536 16:33:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.536 16:33:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.536 16:33:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:52.536 16:33:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:52.536 16:33:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:52.536 16:33:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.536 16:33:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.536 ************************************ 00:07:52.536 START TEST raid_state_function_test 00:07:52.536 ************************************ 00:07:52.536 16:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:07:52.536 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:52.536 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:52.536 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:52.536 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:52.796 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:52.796 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:52.796 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:52.796 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:52.796 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:52.796 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:52.796 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:52.796 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:52.796 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:52.796 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:52.797 Process raid pid: 74232 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74232 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74232' 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74232 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74232 ']' 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.797 16:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.797 [2024-12-07 16:33:51.522667] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:52.797 [2024-12-07 16:33:51.522799] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.797 [2024-12-07 16:33:51.682798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.057 [2024-12-07 16:33:51.753772] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.057 [2024-12-07 16:33:51.831403] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.057 [2024-12-07 16:33:51.831449] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.627 [2024-12-07 16:33:52.395632] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:53.627 [2024-12-07 16:33:52.395705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:53.627 [2024-12-07 16:33:52.395719] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.627 [2024-12-07 16:33:52.395730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.627 16:33:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.627 "name": "Existed_Raid", 00:07:53.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.627 "strip_size_kb": 0, 00:07:53.627 "state": "configuring", 00:07:53.627 
"raid_level": "raid1", 00:07:53.627 "superblock": false, 00:07:53.627 "num_base_bdevs": 2, 00:07:53.627 "num_base_bdevs_discovered": 0, 00:07:53.627 "num_base_bdevs_operational": 2, 00:07:53.627 "base_bdevs_list": [ 00:07:53.627 { 00:07:53.627 "name": "BaseBdev1", 00:07:53.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.627 "is_configured": false, 00:07:53.627 "data_offset": 0, 00:07:53.627 "data_size": 0 00:07:53.627 }, 00:07:53.627 { 00:07:53.627 "name": "BaseBdev2", 00:07:53.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.627 "is_configured": false, 00:07:53.627 "data_offset": 0, 00:07:53.627 "data_size": 0 00:07:53.627 } 00:07:53.627 ] 00:07:53.627 }' 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.627 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.887 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:53.887 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.887 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.148 [2024-12-07 16:33:52.786834] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:54.148 [2024-12-07 16:33:52.787017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:54.148 [2024-12-07 16:33:52.798852] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:54.148 [2024-12-07 16:33:52.798936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:54.148 [2024-12-07 16:33:52.798990] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.148 [2024-12-07 16:33:52.799014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.148 [2024-12-07 16:33:52.826321] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.148 BaseBdev1 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.148 [ 00:07:54.148 { 00:07:54.148 "name": "BaseBdev1", 00:07:54.148 "aliases": [ 00:07:54.148 "0ca0b7f6-9222-4fca-a827-6ca861193156" 00:07:54.148 ], 00:07:54.148 "product_name": "Malloc disk", 00:07:54.148 "block_size": 512, 00:07:54.148 "num_blocks": 65536, 00:07:54.148 "uuid": "0ca0b7f6-9222-4fca-a827-6ca861193156", 00:07:54.148 "assigned_rate_limits": { 00:07:54.148 "rw_ios_per_sec": 0, 00:07:54.148 "rw_mbytes_per_sec": 0, 00:07:54.148 "r_mbytes_per_sec": 0, 00:07:54.148 "w_mbytes_per_sec": 0 00:07:54.148 }, 00:07:54.148 "claimed": true, 00:07:54.148 "claim_type": "exclusive_write", 00:07:54.148 "zoned": false, 00:07:54.148 "supported_io_types": { 00:07:54.148 "read": true, 00:07:54.148 "write": true, 00:07:54.148 "unmap": true, 00:07:54.148 "flush": true, 00:07:54.148 "reset": true, 00:07:54.148 "nvme_admin": false, 00:07:54.148 "nvme_io": false, 00:07:54.148 "nvme_io_md": false, 00:07:54.148 "write_zeroes": true, 00:07:54.148 "zcopy": true, 00:07:54.148 "get_zone_info": false, 00:07:54.148 "zone_management": false, 00:07:54.148 "zone_append": false, 00:07:54.148 "compare": false, 00:07:54.148 "compare_and_write": false, 00:07:54.148 "abort": true, 00:07:54.148 "seek_hole": false, 00:07:54.148 "seek_data": false, 00:07:54.148 "copy": true, 00:07:54.148 "nvme_iov_md": 
false 00:07:54.148 }, 00:07:54.148 "memory_domains": [ 00:07:54.148 { 00:07:54.148 "dma_device_id": "system", 00:07:54.148 "dma_device_type": 1 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.148 "dma_device_type": 2 00:07:54.148 } 00:07:54.148 ], 00:07:54.148 "driver_specific": {} 00:07:54.148 } 00:07:54.148 ] 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.148 
16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.148 "name": "Existed_Raid", 00:07:54.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.148 "strip_size_kb": 0, 00:07:54.148 "state": "configuring", 00:07:54.148 "raid_level": "raid1", 00:07:54.148 "superblock": false, 00:07:54.148 "num_base_bdevs": 2, 00:07:54.148 "num_base_bdevs_discovered": 1, 00:07:54.148 "num_base_bdevs_operational": 2, 00:07:54.148 "base_bdevs_list": [ 00:07:54.148 { 00:07:54.148 "name": "BaseBdev1", 00:07:54.148 "uuid": "0ca0b7f6-9222-4fca-a827-6ca861193156", 00:07:54.148 "is_configured": true, 00:07:54.148 "data_offset": 0, 00:07:54.148 "data_size": 65536 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "name": "BaseBdev2", 00:07:54.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.148 "is_configured": false, 00:07:54.148 "data_offset": 0, 00:07:54.148 "data_size": 0 00:07:54.148 } 00:07:54.148 ] 00:07:54.148 }' 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.148 16:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.717 [2024-12-07 16:33:53.317538] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:54.717 [2024-12-07 16:33:53.317686] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.717 [2024-12-07 16:33:53.325509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.717 [2024-12-07 16:33:53.327679] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.717 [2024-12-07 16:33:53.327722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.717 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.717 "name": "Existed_Raid", 00:07:54.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.717 "strip_size_kb": 0, 00:07:54.717 "state": "configuring", 00:07:54.717 "raid_level": "raid1", 00:07:54.717 "superblock": false, 00:07:54.717 "num_base_bdevs": 2, 00:07:54.717 "num_base_bdevs_discovered": 1, 00:07:54.717 "num_base_bdevs_operational": 2, 00:07:54.717 "base_bdevs_list": [ 00:07:54.717 { 00:07:54.717 "name": "BaseBdev1", 00:07:54.717 "uuid": "0ca0b7f6-9222-4fca-a827-6ca861193156", 00:07:54.717 "is_configured": true, 00:07:54.717 "data_offset": 0, 00:07:54.717 "data_size": 65536 00:07:54.717 }, 00:07:54.717 { 00:07:54.717 "name": "BaseBdev2", 00:07:54.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.717 "is_configured": false, 00:07:54.717 "data_offset": 0, 00:07:54.717 "data_size": 0 00:07:54.717 } 00:07:54.718 ] 
00:07:54.718 }' 00:07:54.718 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.718 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.977 [2024-12-07 16:33:53.797758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:54.977 [2024-12-07 16:33:53.798076] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:54.977 [2024-12-07 16:33:53.798167] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:54.977 [2024-12-07 16:33:53.799130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:54.977 [2024-12-07 16:33:53.799661] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:54.977 [2024-12-07 16:33:53.799800] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:54.977 [2024-12-07 16:33:53.800425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.977 BaseBdev2 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.977 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.978 [ 00:07:54.978 { 00:07:54.978 "name": "BaseBdev2", 00:07:54.978 "aliases": [ 00:07:54.978 "0ab89c2b-49ea-4017-9199-42e26a29b7e4" 00:07:54.978 ], 00:07:54.978 "product_name": "Malloc disk", 00:07:54.978 "block_size": 512, 00:07:54.978 "num_blocks": 65536, 00:07:54.978 "uuid": "0ab89c2b-49ea-4017-9199-42e26a29b7e4", 00:07:54.978 "assigned_rate_limits": { 00:07:54.978 "rw_ios_per_sec": 0, 00:07:54.978 "rw_mbytes_per_sec": 0, 00:07:54.978 "r_mbytes_per_sec": 0, 00:07:54.978 "w_mbytes_per_sec": 0 00:07:54.978 }, 00:07:54.978 "claimed": true, 00:07:54.978 "claim_type": "exclusive_write", 00:07:54.978 "zoned": false, 00:07:54.978 "supported_io_types": { 00:07:54.978 "read": true, 00:07:54.978 "write": true, 00:07:54.978 "unmap": true, 00:07:54.978 "flush": true, 00:07:54.978 "reset": true, 00:07:54.978 "nvme_admin": false, 00:07:54.978 "nvme_io": false, 00:07:54.978 "nvme_io_md": false, 00:07:54.978 "write_zeroes": 
true, 00:07:54.978 "zcopy": true, 00:07:54.978 "get_zone_info": false, 00:07:54.978 "zone_management": false, 00:07:54.978 "zone_append": false, 00:07:54.978 "compare": false, 00:07:54.978 "compare_and_write": false, 00:07:54.978 "abort": true, 00:07:54.978 "seek_hole": false, 00:07:54.978 "seek_data": false, 00:07:54.978 "copy": true, 00:07:54.978 "nvme_iov_md": false 00:07:54.978 }, 00:07:54.978 "memory_domains": [ 00:07:54.978 { 00:07:54.978 "dma_device_id": "system", 00:07:54.978 "dma_device_type": 1 00:07:54.978 }, 00:07:54.978 { 00:07:54.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.978 "dma_device_type": 2 00:07:54.978 } 00:07:54.978 ], 00:07:54.978 "driver_specific": {} 00:07:54.978 } 00:07:54.978 ] 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.978 16:33:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.978 16:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.237 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.237 "name": "Existed_Raid", 00:07:55.237 "uuid": "a29f37ec-65cd-421f-8b24-e2e306d6f757", 00:07:55.237 "strip_size_kb": 0, 00:07:55.237 "state": "online", 00:07:55.237 "raid_level": "raid1", 00:07:55.237 "superblock": false, 00:07:55.237 "num_base_bdevs": 2, 00:07:55.237 "num_base_bdevs_discovered": 2, 00:07:55.237 "num_base_bdevs_operational": 2, 00:07:55.237 "base_bdevs_list": [ 00:07:55.237 { 00:07:55.237 "name": "BaseBdev1", 00:07:55.237 "uuid": "0ca0b7f6-9222-4fca-a827-6ca861193156", 00:07:55.237 "is_configured": true, 00:07:55.237 "data_offset": 0, 00:07:55.237 "data_size": 65536 00:07:55.237 }, 00:07:55.237 { 00:07:55.237 "name": "BaseBdev2", 00:07:55.237 "uuid": "0ab89c2b-49ea-4017-9199-42e26a29b7e4", 00:07:55.237 "is_configured": true, 00:07:55.237 "data_offset": 0, 00:07:55.237 "data_size": 65536 00:07:55.237 } 00:07:55.237 ] 00:07:55.237 }' 00:07:55.237 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.237 16:33:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.497 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:55.497 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:55.497 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:55.497 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:55.497 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:55.497 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:55.497 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:55.497 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.497 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.497 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:55.497 [2024-12-07 16:33:54.265341] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.497 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.497 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:55.497 "name": "Existed_Raid", 00:07:55.497 "aliases": [ 00:07:55.497 "a29f37ec-65cd-421f-8b24-e2e306d6f757" 00:07:55.497 ], 00:07:55.497 "product_name": "Raid Volume", 00:07:55.498 "block_size": 512, 00:07:55.498 "num_blocks": 65536, 00:07:55.498 "uuid": "a29f37ec-65cd-421f-8b24-e2e306d6f757", 00:07:55.498 "assigned_rate_limits": { 00:07:55.498 "rw_ios_per_sec": 0, 00:07:55.498 "rw_mbytes_per_sec": 0, 00:07:55.498 "r_mbytes_per_sec": 0, 00:07:55.498 
"w_mbytes_per_sec": 0 00:07:55.498 }, 00:07:55.498 "claimed": false, 00:07:55.498 "zoned": false, 00:07:55.498 "supported_io_types": { 00:07:55.498 "read": true, 00:07:55.498 "write": true, 00:07:55.498 "unmap": false, 00:07:55.498 "flush": false, 00:07:55.498 "reset": true, 00:07:55.498 "nvme_admin": false, 00:07:55.498 "nvme_io": false, 00:07:55.498 "nvme_io_md": false, 00:07:55.498 "write_zeroes": true, 00:07:55.498 "zcopy": false, 00:07:55.498 "get_zone_info": false, 00:07:55.498 "zone_management": false, 00:07:55.498 "zone_append": false, 00:07:55.498 "compare": false, 00:07:55.498 "compare_and_write": false, 00:07:55.498 "abort": false, 00:07:55.498 "seek_hole": false, 00:07:55.498 "seek_data": false, 00:07:55.498 "copy": false, 00:07:55.498 "nvme_iov_md": false 00:07:55.498 }, 00:07:55.498 "memory_domains": [ 00:07:55.498 { 00:07:55.498 "dma_device_id": "system", 00:07:55.498 "dma_device_type": 1 00:07:55.498 }, 00:07:55.498 { 00:07:55.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.498 "dma_device_type": 2 00:07:55.498 }, 00:07:55.498 { 00:07:55.498 "dma_device_id": "system", 00:07:55.498 "dma_device_type": 1 00:07:55.498 }, 00:07:55.498 { 00:07:55.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.498 "dma_device_type": 2 00:07:55.498 } 00:07:55.498 ], 00:07:55.498 "driver_specific": { 00:07:55.498 "raid": { 00:07:55.498 "uuid": "a29f37ec-65cd-421f-8b24-e2e306d6f757", 00:07:55.498 "strip_size_kb": 0, 00:07:55.498 "state": "online", 00:07:55.498 "raid_level": "raid1", 00:07:55.498 "superblock": false, 00:07:55.498 "num_base_bdevs": 2, 00:07:55.498 "num_base_bdevs_discovered": 2, 00:07:55.498 "num_base_bdevs_operational": 2, 00:07:55.498 "base_bdevs_list": [ 00:07:55.498 { 00:07:55.498 "name": "BaseBdev1", 00:07:55.498 "uuid": "0ca0b7f6-9222-4fca-a827-6ca861193156", 00:07:55.498 "is_configured": true, 00:07:55.498 "data_offset": 0, 00:07:55.498 "data_size": 65536 00:07:55.498 }, 00:07:55.498 { 00:07:55.498 "name": "BaseBdev2", 00:07:55.498 "uuid": 
"0ab89c2b-49ea-4017-9199-42e26a29b7e4", 00:07:55.498 "is_configured": true, 00:07:55.498 "data_offset": 0, 00:07:55.498 "data_size": 65536 00:07:55.498 } 00:07:55.498 ] 00:07:55.498 } 00:07:55.498 } 00:07:55.498 }' 00:07:55.498 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.498 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:55.498 BaseBdev2' 00:07:55.498 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:55.758 16:33:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.758 [2024-12-07 16:33:54.508642] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.758 "name": "Existed_Raid", 00:07:55.758 "uuid": "a29f37ec-65cd-421f-8b24-e2e306d6f757", 00:07:55.758 "strip_size_kb": 0, 00:07:55.758 "state": "online", 00:07:55.758 "raid_level": "raid1", 00:07:55.758 "superblock": false, 00:07:55.758 "num_base_bdevs": 2, 00:07:55.758 "num_base_bdevs_discovered": 1, 00:07:55.758 "num_base_bdevs_operational": 1, 00:07:55.758 "base_bdevs_list": [ 00:07:55.758 { 
00:07:55.758 "name": null, 00:07:55.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.758 "is_configured": false, 00:07:55.758 "data_offset": 0, 00:07:55.758 "data_size": 65536 00:07:55.758 }, 00:07:55.758 { 00:07:55.758 "name": "BaseBdev2", 00:07:55.758 "uuid": "0ab89c2b-49ea-4017-9199-42e26a29b7e4", 00:07:55.758 "is_configured": true, 00:07:55.758 "data_offset": 0, 00:07:55.758 "data_size": 65536 00:07:55.758 } 00:07:55.758 ] 00:07:55.758 }' 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.758 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.329 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:56.329 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:56.329 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.329 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.329 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.329 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:56.329 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:56.329 [2024-12-07 16:33:55.013270] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:56.329 [2024-12-07 16:33:55.013515] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.329 [2024-12-07 16:33:55.033320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.329 [2024-12-07 16:33:55.033448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.329 [2024-12-07 16:33:55.033497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74232 00:07:56.329 16:33:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74232 ']' 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74232 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74232 00:07:56.329 killing process with pid 74232 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74232' 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74232 00:07:56.329 [2024-12-07 16:33:55.116866] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.329 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74232 00:07:56.329 [2024-12-07 16:33:55.118543] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:56.899 00:07:56.899 real 0m4.063s 00:07:56.899 user 0m6.191s 00:07:56.899 sys 0m0.878s 00:07:56.899 ************************************ 00:07:56.899 END TEST raid_state_function_test 00:07:56.899 ************************************ 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.899 16:33:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:56.899 16:33:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:56.899 16:33:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.899 16:33:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.899 ************************************ 00:07:56.899 START TEST raid_state_function_test_sb 00:07:56.899 ************************************ 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74474 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74474' 00:07:56.899 Process raid pid: 74474 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74474 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74474 ']' 00:07:56.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.899 16:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.899 [2024-12-07 16:33:55.650309] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:56.899 [2024-12-07 16:33:55.650561] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.159 [2024-12-07 16:33:55.813140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.159 [2024-12-07 16:33:55.894584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.159 [2024-12-07 16:33:55.974550] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.159 [2024-12-07 16:33:55.974691] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.728 [2024-12-07 16:33:56.544146] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.728 [2024-12-07 16:33:56.544234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.728 [2024-12-07 16:33:56.544260] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.728 [2024-12-07 16:33:56.544271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.728 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.728 16:33:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.729 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.729 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.729 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.729 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.729 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.729 "name": "Existed_Raid", 00:07:57.729 "uuid": "609ddb28-dd8f-42b2-bd28-58a4b0788c1c", 00:07:57.729 "strip_size_kb": 0, 00:07:57.729 "state": "configuring", 00:07:57.729 "raid_level": "raid1", 00:07:57.729 "superblock": true, 00:07:57.729 "num_base_bdevs": 2, 00:07:57.729 "num_base_bdevs_discovered": 0, 00:07:57.729 "num_base_bdevs_operational": 2, 00:07:57.729 "base_bdevs_list": [ 00:07:57.729 { 00:07:57.729 "name": "BaseBdev1", 00:07:57.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.729 "is_configured": false, 00:07:57.729 "data_offset": 0, 00:07:57.729 "data_size": 0 00:07:57.729 }, 00:07:57.729 { 00:07:57.729 "name": "BaseBdev2", 00:07:57.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.729 "is_configured": false, 00:07:57.729 "data_offset": 0, 00:07:57.729 "data_size": 0 00:07:57.729 } 00:07:57.729 ] 00:07:57.729 }' 00:07:57.729 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.729 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.297 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:58.297 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:58.297 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.297 [2024-12-07 16:33:56.967325] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:58.297 [2024-12-07 16:33:56.967511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:58.297 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.297 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:58.297 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.297 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.297 [2024-12-07 16:33:56.975308] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:58.297 [2024-12-07 16:33:56.975407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:58.297 [2024-12-07 16:33:56.975439] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.297 [2024-12-07 16:33:56.975463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.297 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.297 16:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:58.297 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.297 16:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.297 [2024-12-07 16:33:57.002799] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:58.297 BaseBdev1 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.297 [ 00:07:58.297 { 00:07:58.297 "name": "BaseBdev1", 00:07:58.297 "aliases": [ 00:07:58.297 "e8ce3b63-52d1-45a1-b535-17d67c0d1188" 00:07:58.297 ], 00:07:58.297 "product_name": "Malloc disk", 00:07:58.297 "block_size": 512, 00:07:58.297 "num_blocks": 65536, 00:07:58.297 "uuid": "e8ce3b63-52d1-45a1-b535-17d67c0d1188", 00:07:58.297 
"assigned_rate_limits": { 00:07:58.297 "rw_ios_per_sec": 0, 00:07:58.297 "rw_mbytes_per_sec": 0, 00:07:58.297 "r_mbytes_per_sec": 0, 00:07:58.297 "w_mbytes_per_sec": 0 00:07:58.297 }, 00:07:58.297 "claimed": true, 00:07:58.297 "claim_type": "exclusive_write", 00:07:58.297 "zoned": false, 00:07:58.297 "supported_io_types": { 00:07:58.297 "read": true, 00:07:58.297 "write": true, 00:07:58.297 "unmap": true, 00:07:58.297 "flush": true, 00:07:58.297 "reset": true, 00:07:58.297 "nvme_admin": false, 00:07:58.297 "nvme_io": false, 00:07:58.297 "nvme_io_md": false, 00:07:58.297 "write_zeroes": true, 00:07:58.297 "zcopy": true, 00:07:58.297 "get_zone_info": false, 00:07:58.297 "zone_management": false, 00:07:58.297 "zone_append": false, 00:07:58.297 "compare": false, 00:07:58.297 "compare_and_write": false, 00:07:58.297 "abort": true, 00:07:58.297 "seek_hole": false, 00:07:58.297 "seek_data": false, 00:07:58.297 "copy": true, 00:07:58.297 "nvme_iov_md": false 00:07:58.297 }, 00:07:58.297 "memory_domains": [ 00:07:58.297 { 00:07:58.297 "dma_device_id": "system", 00:07:58.297 "dma_device_type": 1 00:07:58.297 }, 00:07:58.297 { 00:07:58.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.297 "dma_device_type": 2 00:07:58.297 } 00:07:58.297 ], 00:07:58.297 "driver_specific": {} 00:07:58.297 } 00:07:58.297 ] 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.297 "name": "Existed_Raid", 00:07:58.297 "uuid": "a6442e4e-4a89-4325-9219-fc82fc8e102b", 00:07:58.297 "strip_size_kb": 0, 00:07:58.297 "state": "configuring", 00:07:58.297 "raid_level": "raid1", 00:07:58.297 "superblock": true, 00:07:58.297 "num_base_bdevs": 2, 00:07:58.297 "num_base_bdevs_discovered": 1, 00:07:58.297 "num_base_bdevs_operational": 2, 00:07:58.297 "base_bdevs_list": [ 00:07:58.297 { 00:07:58.297 "name": "BaseBdev1", 00:07:58.297 "uuid": "e8ce3b63-52d1-45a1-b535-17d67c0d1188", 00:07:58.297 "is_configured": true, 00:07:58.297 "data_offset": 2048, 
00:07:58.297 "data_size": 63488 00:07:58.297 }, 00:07:58.297 { 00:07:58.297 "name": "BaseBdev2", 00:07:58.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.297 "is_configured": false, 00:07:58.297 "data_offset": 0, 00:07:58.297 "data_size": 0 00:07:58.297 } 00:07:58.297 ] 00:07:58.297 }' 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.297 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.961 [2024-12-07 16:33:57.525971] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:58.961 [2024-12-07 16:33:57.526122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.961 [2024-12-07 16:33:57.537966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.961 [2024-12-07 16:33:57.540175] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.961 [2024-12-07 16:33:57.540263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.961 "name": "Existed_Raid", 00:07:58.961 "uuid": "cca0c432-00b8-4a18-940c-5bbf6bce5f6b", 00:07:58.961 "strip_size_kb": 0, 00:07:58.961 "state": "configuring", 00:07:58.961 "raid_level": "raid1", 00:07:58.961 "superblock": true, 00:07:58.961 "num_base_bdevs": 2, 00:07:58.961 "num_base_bdevs_discovered": 1, 00:07:58.961 "num_base_bdevs_operational": 2, 00:07:58.961 "base_bdevs_list": [ 00:07:58.961 { 00:07:58.961 "name": "BaseBdev1", 00:07:58.961 "uuid": "e8ce3b63-52d1-45a1-b535-17d67c0d1188", 00:07:58.961 "is_configured": true, 00:07:58.961 "data_offset": 2048, 00:07:58.961 "data_size": 63488 00:07:58.961 }, 00:07:58.961 { 00:07:58.961 "name": "BaseBdev2", 00:07:58.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.961 "is_configured": false, 00:07:58.961 "data_offset": 0, 00:07:58.961 "data_size": 0 00:07:58.961 } 00:07:58.961 ] 00:07:58.961 }' 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.961 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.221 [2024-12-07 16:33:57.979631] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:59.221 [2024-12-07 16:33:57.980433] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:59.221 [2024-12-07 16:33:57.980497] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:59.221 BaseBdev2 00:07:59.221 [2024-12-07 16:33:57.981566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:59.221 [2024-12-07 16:33:57.982047] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:59.221 [2024-12-07 16:33:57.982118] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:59.221 [2024-12-07 16:33:57.982620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.221 16:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.221 [ 00:07:59.221 { 00:07:59.221 "name": "BaseBdev2", 00:07:59.221 "aliases": [ 00:07:59.221 "0dfe9405-f634-4711-aaec-00ac9ae4ae10" 00:07:59.221 ], 00:07:59.221 "product_name": "Malloc disk", 00:07:59.221 "block_size": 512, 00:07:59.221 "num_blocks": 65536, 00:07:59.221 "uuid": "0dfe9405-f634-4711-aaec-00ac9ae4ae10", 00:07:59.221 "assigned_rate_limits": { 00:07:59.221 "rw_ios_per_sec": 0, 00:07:59.221 "rw_mbytes_per_sec": 0, 00:07:59.221 "r_mbytes_per_sec": 0, 00:07:59.221 "w_mbytes_per_sec": 0 00:07:59.221 }, 00:07:59.221 "claimed": true, 00:07:59.221 "claim_type": "exclusive_write", 00:07:59.221 "zoned": false, 00:07:59.221 "supported_io_types": { 00:07:59.221 "read": true, 00:07:59.221 "write": true, 00:07:59.221 "unmap": true, 00:07:59.221 "flush": true, 00:07:59.221 "reset": true, 00:07:59.221 "nvme_admin": false, 00:07:59.221 "nvme_io": false, 00:07:59.221 "nvme_io_md": false, 00:07:59.221 "write_zeroes": true, 00:07:59.221 "zcopy": true, 00:07:59.221 "get_zone_info": false, 00:07:59.221 "zone_management": false, 00:07:59.221 "zone_append": false, 00:07:59.221 "compare": false, 00:07:59.221 "compare_and_write": false, 00:07:59.221 "abort": true, 00:07:59.221 "seek_hole": false, 00:07:59.221 "seek_data": false, 00:07:59.221 "copy": true, 00:07:59.221 "nvme_iov_md": false 00:07:59.221 }, 00:07:59.221 "memory_domains": [ 00:07:59.221 { 00:07:59.221 "dma_device_id": "system", 00:07:59.221 "dma_device_type": 1 00:07:59.221 }, 00:07:59.221 { 00:07:59.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.221 "dma_device_type": 2 00:07:59.221 } 00:07:59.221 ], 00:07:59.221 "driver_specific": {} 00:07:59.221 } 00:07:59.221 ] 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.221 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.221 "name": "Existed_Raid", 00:07:59.221 "uuid": "cca0c432-00b8-4a18-940c-5bbf6bce5f6b", 00:07:59.221 "strip_size_kb": 0, 00:07:59.221 "state": "online", 00:07:59.221 "raid_level": "raid1", 00:07:59.221 "superblock": true, 00:07:59.221 "num_base_bdevs": 2, 00:07:59.221 "num_base_bdevs_discovered": 2, 00:07:59.221 "num_base_bdevs_operational": 2, 00:07:59.221 "base_bdevs_list": [ 00:07:59.221 { 00:07:59.221 "name": "BaseBdev1", 00:07:59.221 "uuid": "e8ce3b63-52d1-45a1-b535-17d67c0d1188", 00:07:59.221 "is_configured": true, 00:07:59.221 "data_offset": 2048, 00:07:59.221 "data_size": 63488 00:07:59.221 }, 00:07:59.221 { 00:07:59.221 "name": "BaseBdev2", 00:07:59.221 "uuid": "0dfe9405-f634-4711-aaec-00ac9ae4ae10", 00:07:59.221 "is_configured": true, 00:07:59.221 "data_offset": 2048, 00:07:59.221 "data_size": 63488 00:07:59.221 } 00:07:59.221 ] 00:07:59.221 }' 00:07:59.222 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.222 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.791 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:59.791 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:59.791 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.791 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.791 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.791 16:33:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.791 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.791 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:59.791 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.791 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.791 [2024-12-07 16:33:58.451163] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.791 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.791 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.791 "name": "Existed_Raid", 00:07:59.791 "aliases": [ 00:07:59.791 "cca0c432-00b8-4a18-940c-5bbf6bce5f6b" 00:07:59.791 ], 00:07:59.791 "product_name": "Raid Volume", 00:07:59.791 "block_size": 512, 00:07:59.792 "num_blocks": 63488, 00:07:59.792 "uuid": "cca0c432-00b8-4a18-940c-5bbf6bce5f6b", 00:07:59.792 "assigned_rate_limits": { 00:07:59.792 "rw_ios_per_sec": 0, 00:07:59.792 "rw_mbytes_per_sec": 0, 00:07:59.792 "r_mbytes_per_sec": 0, 00:07:59.792 "w_mbytes_per_sec": 0 00:07:59.792 }, 00:07:59.792 "claimed": false, 00:07:59.792 "zoned": false, 00:07:59.792 "supported_io_types": { 00:07:59.792 "read": true, 00:07:59.792 "write": true, 00:07:59.792 "unmap": false, 00:07:59.792 "flush": false, 00:07:59.792 "reset": true, 00:07:59.792 "nvme_admin": false, 00:07:59.792 "nvme_io": false, 00:07:59.792 "nvme_io_md": false, 00:07:59.792 "write_zeroes": true, 00:07:59.792 "zcopy": false, 00:07:59.792 "get_zone_info": false, 00:07:59.792 "zone_management": false, 00:07:59.792 "zone_append": false, 00:07:59.792 "compare": false, 00:07:59.792 "compare_and_write": false, 00:07:59.792 "abort": false, 00:07:59.792 "seek_hole": false, 
00:07:59.792 "seek_data": false, 00:07:59.792 "copy": false, 00:07:59.792 "nvme_iov_md": false 00:07:59.792 }, 00:07:59.792 "memory_domains": [ 00:07:59.792 { 00:07:59.792 "dma_device_id": "system", 00:07:59.792 "dma_device_type": 1 00:07:59.792 }, 00:07:59.792 { 00:07:59.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.792 "dma_device_type": 2 00:07:59.792 }, 00:07:59.792 { 00:07:59.792 "dma_device_id": "system", 00:07:59.792 "dma_device_type": 1 00:07:59.792 }, 00:07:59.792 { 00:07:59.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.792 "dma_device_type": 2 00:07:59.792 } 00:07:59.792 ], 00:07:59.792 "driver_specific": { 00:07:59.792 "raid": { 00:07:59.792 "uuid": "cca0c432-00b8-4a18-940c-5bbf6bce5f6b", 00:07:59.792 "strip_size_kb": 0, 00:07:59.792 "state": "online", 00:07:59.792 "raid_level": "raid1", 00:07:59.792 "superblock": true, 00:07:59.792 "num_base_bdevs": 2, 00:07:59.792 "num_base_bdevs_discovered": 2, 00:07:59.792 "num_base_bdevs_operational": 2, 00:07:59.792 "base_bdevs_list": [ 00:07:59.792 { 00:07:59.792 "name": "BaseBdev1", 00:07:59.792 "uuid": "e8ce3b63-52d1-45a1-b535-17d67c0d1188", 00:07:59.792 "is_configured": true, 00:07:59.792 "data_offset": 2048, 00:07:59.792 "data_size": 63488 00:07:59.792 }, 00:07:59.792 { 00:07:59.792 "name": "BaseBdev2", 00:07:59.792 "uuid": "0dfe9405-f634-4711-aaec-00ac9ae4ae10", 00:07:59.792 "is_configured": true, 00:07:59.792 "data_offset": 2048, 00:07:59.792 "data_size": 63488 00:07:59.792 } 00:07:59.792 ] 00:07:59.792 } 00:07:59.792 } 00:07:59.792 }' 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:59.792 BaseBdev2' 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.792 16:33:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.792 [2024-12-07 16:33:58.646613] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.792 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.052 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.052 16:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.052 "name": "Existed_Raid", 00:08:00.052 "uuid": "cca0c432-00b8-4a18-940c-5bbf6bce5f6b", 00:08:00.052 "strip_size_kb": 0, 00:08:00.052 "state": "online", 00:08:00.052 "raid_level": "raid1", 00:08:00.052 "superblock": true, 00:08:00.052 "num_base_bdevs": 2, 00:08:00.052 "num_base_bdevs_discovered": 1, 00:08:00.052 "num_base_bdevs_operational": 1, 00:08:00.052 "base_bdevs_list": [ 00:08:00.052 { 00:08:00.052 "name": null, 00:08:00.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.052 "is_configured": false, 00:08:00.052 "data_offset": 0, 00:08:00.052 "data_size": 63488 00:08:00.052 }, 00:08:00.052 { 00:08:00.052 "name": "BaseBdev2", 00:08:00.052 "uuid": "0dfe9405-f634-4711-aaec-00ac9ae4ae10", 00:08:00.052 "is_configured": true, 00:08:00.052 "data_offset": 2048, 00:08:00.052 "data_size": 63488 00:08:00.052 } 00:08:00.052 ] 00:08:00.052 }' 00:08:00.052 16:33:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.052 16:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.312 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:00.312 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.312 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.312 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.312 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.312 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:00.312 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.312 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:00.312 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:00.312 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:00.312 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.312 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.312 [2024-12-07 16:33:59.133829] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:00.312 [2024-12-07 16:33:59.133955] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.312 [2024-12-07 16:33:59.155093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.312 [2024-12-07 16:33:59.155204] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.312 [2024-12-07 16:33:59.155248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:00.313 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.313 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:00.313 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.313 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.313 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.313 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.313 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:00.313 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.313 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:00.313 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:00.313 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:00.313 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74474 00:08:00.313 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74474 ']' 00:08:00.313 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74474 00:08:00.573 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:00.573 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:08:00.573 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74474 00:08:00.573 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:00.573 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:00.573 killing process with pid 74474 00:08:00.573 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74474' 00:08:00.573 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74474 00:08:00.573 [2024-12-07 16:33:59.256190] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.573 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74474 00:08:00.573 [2024-12-07 16:33:59.257716] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.833 ************************************ 00:08:00.833 END TEST raid_state_function_test_sb 00:08:00.833 ************************************ 00:08:00.833 16:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:00.833 00:08:00.833 real 0m4.069s 00:08:00.833 user 0m6.202s 00:08:00.833 sys 0m0.875s 00:08:00.833 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.833 16:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.833 16:33:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:00.833 16:33:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:00.833 16:33:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.833 16:33:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.833 ************************************ 00:08:00.833 START TEST 
raid_superblock_test 00:08:00.833 ************************************ 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74715 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74715 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74715 ']' 00:08:00.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.833 16:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.093 [2024-12-07 16:33:59.789244] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:01.093 [2024-12-07 16:33:59.789411] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74715 ] 00:08:01.093 [2024-12-07 16:33:59.950190] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.352 [2024-12-07 16:34:00.019921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.352 [2024-12-07 16:34:00.097673] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.352 [2024-12-07 16:34:00.097718] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:01.923 
16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.923 malloc1 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.923 [2024-12-07 16:34:00.645457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:01.923 [2024-12-07 16:34:00.645613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.923 [2024-12-07 16:34:00.645663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:01.923 [2024-12-07 16:34:00.645702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.923 [2024-12-07 16:34:00.648126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.923 [2024-12-07 16:34:00.648204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:01.923 pt1 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.923 malloc2 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.923 [2024-12-07 16:34:00.695515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:01.923 [2024-12-07 16:34:00.695586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.923 [2024-12-07 16:34:00.695607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:01.923 [2024-12-07 16:34:00.695622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.923 [2024-12-07 16:34:00.698533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.923 [2024-12-07 16:34:00.698573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:01.923 
pt2 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.923 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.923 [2024-12-07 16:34:00.707531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:01.923 [2024-12-07 16:34:00.709738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:01.923 [2024-12-07 16:34:00.709951] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:01.923 [2024-12-07 16:34:00.709971] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:01.923 [2024-12-07 16:34:00.710247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:01.923 [2024-12-07 16:34:00.710399] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:01.923 [2024-12-07 16:34:00.710409] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:01.923 [2024-12-07 16:34:00.710575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.924 "name": "raid_bdev1", 00:08:01.924 "uuid": "66548137-7f3e-42b1-9eb0-7edfa0d7482f", 00:08:01.924 "strip_size_kb": 0, 00:08:01.924 "state": "online", 00:08:01.924 "raid_level": "raid1", 00:08:01.924 "superblock": true, 00:08:01.924 "num_base_bdevs": 2, 00:08:01.924 "num_base_bdevs_discovered": 2, 00:08:01.924 "num_base_bdevs_operational": 2, 00:08:01.924 "base_bdevs_list": [ 00:08:01.924 { 00:08:01.924 "name": "pt1", 00:08:01.924 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:01.924 "is_configured": true, 00:08:01.924 "data_offset": 2048, 00:08:01.924 "data_size": 63488 00:08:01.924 }, 00:08:01.924 { 00:08:01.924 "name": "pt2", 00:08:01.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.924 "is_configured": true, 00:08:01.924 "data_offset": 2048, 00:08:01.924 "data_size": 63488 00:08:01.924 } 00:08:01.924 ] 00:08:01.924 }' 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.924 16:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.492 [2024-12-07 16:34:01.163088] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:02.492 "name": "raid_bdev1", 00:08:02.492 "aliases": [ 00:08:02.492 "66548137-7f3e-42b1-9eb0-7edfa0d7482f" 00:08:02.492 ], 00:08:02.492 "product_name": "Raid Volume", 00:08:02.492 "block_size": 512, 00:08:02.492 "num_blocks": 63488, 00:08:02.492 "uuid": "66548137-7f3e-42b1-9eb0-7edfa0d7482f", 00:08:02.492 "assigned_rate_limits": { 00:08:02.492 "rw_ios_per_sec": 0, 00:08:02.492 "rw_mbytes_per_sec": 0, 00:08:02.492 "r_mbytes_per_sec": 0, 00:08:02.492 "w_mbytes_per_sec": 0 00:08:02.492 }, 00:08:02.492 "claimed": false, 00:08:02.492 "zoned": false, 00:08:02.492 "supported_io_types": { 00:08:02.492 "read": true, 00:08:02.492 "write": true, 00:08:02.492 "unmap": false, 00:08:02.492 "flush": false, 00:08:02.492 "reset": true, 00:08:02.492 "nvme_admin": false, 00:08:02.492 "nvme_io": false, 00:08:02.492 "nvme_io_md": false, 00:08:02.492 "write_zeroes": true, 00:08:02.492 "zcopy": false, 00:08:02.492 "get_zone_info": false, 00:08:02.492 "zone_management": false, 00:08:02.492 "zone_append": false, 00:08:02.492 "compare": false, 00:08:02.492 "compare_and_write": false, 00:08:02.492 "abort": false, 00:08:02.492 "seek_hole": false, 00:08:02.492 "seek_data": false, 00:08:02.492 "copy": false, 00:08:02.492 "nvme_iov_md": false 00:08:02.492 }, 00:08:02.492 "memory_domains": [ 00:08:02.492 { 00:08:02.492 "dma_device_id": "system", 00:08:02.492 "dma_device_type": 1 00:08:02.492 }, 00:08:02.492 { 00:08:02.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.492 "dma_device_type": 2 00:08:02.492 }, 00:08:02.492 { 00:08:02.492 "dma_device_id": "system", 00:08:02.492 "dma_device_type": 1 00:08:02.492 }, 00:08:02.492 { 00:08:02.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.492 "dma_device_type": 2 00:08:02.492 } 00:08:02.492 ], 00:08:02.492 "driver_specific": { 00:08:02.492 "raid": { 00:08:02.492 "uuid": "66548137-7f3e-42b1-9eb0-7edfa0d7482f", 00:08:02.492 "strip_size_kb": 0, 00:08:02.492 "state": "online", 00:08:02.492 "raid_level": "raid1", 
00:08:02.492 "superblock": true, 00:08:02.492 "num_base_bdevs": 2, 00:08:02.492 "num_base_bdevs_discovered": 2, 00:08:02.492 "num_base_bdevs_operational": 2, 00:08:02.492 "base_bdevs_list": [ 00:08:02.492 { 00:08:02.492 "name": "pt1", 00:08:02.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.492 "is_configured": true, 00:08:02.492 "data_offset": 2048, 00:08:02.492 "data_size": 63488 00:08:02.492 }, 00:08:02.492 { 00:08:02.492 "name": "pt2", 00:08:02.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.492 "is_configured": true, 00:08:02.492 "data_offset": 2048, 00:08:02.492 "data_size": 63488 00:08:02.492 } 00:08:02.492 ] 00:08:02.492 } 00:08:02.492 } 00:08:02.492 }' 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:02.492 pt2' 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.492 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.493 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.493 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.493 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.493 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:02.493 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.493 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:02.493 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.493 [2024-12-07 16:34:01.378647] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=66548137-7f3e-42b1-9eb0-7edfa0d7482f 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 66548137-7f3e-42b1-9eb0-7edfa0d7482f ']' 00:08:02.753 16:34:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.753 [2024-12-07 16:34:01.426331] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.753 [2024-12-07 16:34:01.426371] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.753 [2024-12-07 16:34:01.426446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.753 [2024-12-07 16:34:01.426535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.753 [2024-12-07 16:34:01.426553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.753 16:34:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.753 [2024-12-07 16:34:01.554140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:02.753 [2024-12-07 16:34:01.556429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:02.753 [2024-12-07 16:34:01.556547] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:02.753 [2024-12-07 16:34:01.556632] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:02.753 [2024-12-07 16:34:01.556684] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.753 [2024-12-07 16:34:01.556711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:02.753 request: 00:08:02.753 { 00:08:02.753 "name": "raid_bdev1", 00:08:02.753 "raid_level": "raid1", 00:08:02.753 "base_bdevs": [ 00:08:02.753 "malloc1", 00:08:02.753 "malloc2" 00:08:02.753 ], 00:08:02.753 "superblock": false, 00:08:02.753 "method": "bdev_raid_create", 00:08:02.753 "req_id": 1 00:08:02.753 } 00:08:02.753 Got 
JSON-RPC error response 00:08:02.753 response: 00:08:02.753 { 00:08:02.753 "code": -17, 00:08:02.753 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:02.753 } 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.753 [2024-12-07 16:34:01.622005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:02.753 [2024-12-07 16:34:01.622086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:08:02.753 [2024-12-07 16:34:01.622122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:02.753 [2024-12-07 16:34:01.622147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.753 [2024-12-07 16:34:01.624612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.753 [2024-12-07 16:34:01.624673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:02.753 [2024-12-07 16:34:01.624762] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:02.753 [2024-12-07 16:34:01.624817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:02.753 pt1 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.753 
16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.753 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.013 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.013 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.013 "name": "raid_bdev1", 00:08:03.013 "uuid": "66548137-7f3e-42b1-9eb0-7edfa0d7482f", 00:08:03.013 "strip_size_kb": 0, 00:08:03.013 "state": "configuring", 00:08:03.013 "raid_level": "raid1", 00:08:03.013 "superblock": true, 00:08:03.013 "num_base_bdevs": 2, 00:08:03.013 "num_base_bdevs_discovered": 1, 00:08:03.013 "num_base_bdevs_operational": 2, 00:08:03.013 "base_bdevs_list": [ 00:08:03.013 { 00:08:03.013 "name": "pt1", 00:08:03.013 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.013 "is_configured": true, 00:08:03.013 "data_offset": 2048, 00:08:03.013 "data_size": 63488 00:08:03.013 }, 00:08:03.013 { 00:08:03.013 "name": null, 00:08:03.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.013 "is_configured": false, 00:08:03.013 "data_offset": 2048, 00:08:03.013 "data_size": 63488 00:08:03.013 } 00:08:03.013 ] 00:08:03.013 }' 00:08:03.013 16:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.013 16:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.274 [2024-12-07 16:34:02.077279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:03.274 [2024-12-07 16:34:02.077380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.274 [2024-12-07 16:34:02.077422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:03.274 [2024-12-07 16:34:02.077432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.274 [2024-12-07 16:34:02.077929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.274 [2024-12-07 16:34:02.077953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:03.274 [2024-12-07 16:34:02.078044] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:03.274 [2024-12-07 16:34:02.078069] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:03.274 [2024-12-07 16:34:02.078171] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:03.274 [2024-12-07 16:34:02.078180] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:03.274 [2024-12-07 16:34:02.078443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:03.274 [2024-12-07 16:34:02.078566] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:03.274 [2024-12-07 16:34:02.078588] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006980 00:08:03.274 [2024-12-07 16:34:02.078698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.274 pt2 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.274 "name": "raid_bdev1", 00:08:03.274 "uuid": "66548137-7f3e-42b1-9eb0-7edfa0d7482f", 00:08:03.274 "strip_size_kb": 0, 00:08:03.274 "state": "online", 00:08:03.274 "raid_level": "raid1", 00:08:03.274 "superblock": true, 00:08:03.274 "num_base_bdevs": 2, 00:08:03.274 "num_base_bdevs_discovered": 2, 00:08:03.274 "num_base_bdevs_operational": 2, 00:08:03.274 "base_bdevs_list": [ 00:08:03.274 { 00:08:03.274 "name": "pt1", 00:08:03.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.274 "is_configured": true, 00:08:03.274 "data_offset": 2048, 00:08:03.274 "data_size": 63488 00:08:03.274 }, 00:08:03.274 { 00:08:03.274 "name": "pt2", 00:08:03.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.274 "is_configured": true, 00:08:03.274 "data_offset": 2048, 00:08:03.274 "data_size": 63488 00:08:03.274 } 00:08:03.274 ] 00:08:03.274 }' 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.274 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.842 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:03.842 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:03.842 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.842 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.842 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.842 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.842 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:03.842 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.842 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.842 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.842 [2024-12-07 16:34:02.548711] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.842 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.842 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.842 "name": "raid_bdev1", 00:08:03.842 "aliases": [ 00:08:03.842 "66548137-7f3e-42b1-9eb0-7edfa0d7482f" 00:08:03.842 ], 00:08:03.842 "product_name": "Raid Volume", 00:08:03.842 "block_size": 512, 00:08:03.842 "num_blocks": 63488, 00:08:03.842 "uuid": "66548137-7f3e-42b1-9eb0-7edfa0d7482f", 00:08:03.842 "assigned_rate_limits": { 00:08:03.842 "rw_ios_per_sec": 0, 00:08:03.842 "rw_mbytes_per_sec": 0, 00:08:03.842 "r_mbytes_per_sec": 0, 00:08:03.842 "w_mbytes_per_sec": 0 00:08:03.842 }, 00:08:03.842 "claimed": false, 00:08:03.842 "zoned": false, 00:08:03.842 "supported_io_types": { 00:08:03.842 "read": true, 00:08:03.842 "write": true, 00:08:03.842 "unmap": false, 00:08:03.842 "flush": false, 00:08:03.842 "reset": true, 00:08:03.842 "nvme_admin": false, 00:08:03.842 "nvme_io": false, 00:08:03.842 "nvme_io_md": false, 00:08:03.842 "write_zeroes": true, 00:08:03.842 "zcopy": false, 00:08:03.842 "get_zone_info": false, 00:08:03.842 "zone_management": false, 00:08:03.842 "zone_append": false, 00:08:03.842 "compare": false, 00:08:03.842 "compare_and_write": false, 00:08:03.842 "abort": false, 00:08:03.842 "seek_hole": false, 00:08:03.842 "seek_data": false, 00:08:03.842 "copy": false, 00:08:03.842 "nvme_iov_md": false 00:08:03.842 }, 00:08:03.842 "memory_domains": [ 00:08:03.842 { 00:08:03.842 "dma_device_id": 
"system", 00:08:03.842 "dma_device_type": 1 00:08:03.842 }, 00:08:03.842 { 00:08:03.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.842 "dma_device_type": 2 00:08:03.842 }, 00:08:03.842 { 00:08:03.842 "dma_device_id": "system", 00:08:03.842 "dma_device_type": 1 00:08:03.842 }, 00:08:03.842 { 00:08:03.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.842 "dma_device_type": 2 00:08:03.842 } 00:08:03.842 ], 00:08:03.842 "driver_specific": { 00:08:03.842 "raid": { 00:08:03.842 "uuid": "66548137-7f3e-42b1-9eb0-7edfa0d7482f", 00:08:03.842 "strip_size_kb": 0, 00:08:03.842 "state": "online", 00:08:03.842 "raid_level": "raid1", 00:08:03.842 "superblock": true, 00:08:03.842 "num_base_bdevs": 2, 00:08:03.842 "num_base_bdevs_discovered": 2, 00:08:03.842 "num_base_bdevs_operational": 2, 00:08:03.842 "base_bdevs_list": [ 00:08:03.842 { 00:08:03.842 "name": "pt1", 00:08:03.842 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.842 "is_configured": true, 00:08:03.842 "data_offset": 2048, 00:08:03.842 "data_size": 63488 00:08:03.842 }, 00:08:03.842 { 00:08:03.842 "name": "pt2", 00:08:03.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.842 "is_configured": true, 00:08:03.842 "data_offset": 2048, 00:08:03.842 "data_size": 63488 00:08:03.843 } 00:08:03.843 ] 00:08:03.843 } 00:08:03.843 } 00:08:03.843 }' 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:03.843 pt2' 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.843 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 
-- # jq -r '.[] | .uuid' 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.102 [2024-12-07 16:34:02.796249] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 66548137-7f3e-42b1-9eb0-7edfa0d7482f '!=' 66548137-7f3e-42b1-9eb0-7edfa0d7482f ']' 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.102 [2024-12-07 16:34:02.843969] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.102 "name": "raid_bdev1", 00:08:04.102 "uuid": "66548137-7f3e-42b1-9eb0-7edfa0d7482f", 00:08:04.102 "strip_size_kb": 0, 00:08:04.102 "state": "online", 00:08:04.102 "raid_level": "raid1", 00:08:04.102 "superblock": true, 00:08:04.102 "num_base_bdevs": 2, 00:08:04.102 "num_base_bdevs_discovered": 1, 00:08:04.102 "num_base_bdevs_operational": 1, 00:08:04.102 "base_bdevs_list": [ 00:08:04.102 { 00:08:04.102 "name": null, 00:08:04.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.102 "is_configured": false, 00:08:04.102 "data_offset": 0, 00:08:04.102 "data_size": 63488 00:08:04.102 }, 00:08:04.102 { 00:08:04.102 "name": "pt2", 00:08:04.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.102 "is_configured": true, 00:08:04.102 "data_offset": 2048, 00:08:04.102 "data_size": 63488 00:08:04.102 } 00:08:04.102 ] 00:08:04.102 }' 00:08:04.102 16:34:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.102 16:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.671 [2024-12-07 16:34:03.271211] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.671 [2024-12-07 16:34:03.271253] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.671 [2024-12-07 16:34:03.271372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.671 [2024-12-07 16:34:03.271432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.671 [2024-12-07 16:34:03.271443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:04.671 
16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.671 [2024-12-07 16:34:03.343071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:04.671 [2024-12-07 16:34:03.343127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.671 [2024-12-07 16:34:03.343157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:04.671 [2024-12-07 16:34:03.343166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.671 [2024-12-07 
16:34:03.345638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.671 [2024-12-07 16:34:03.345672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:04.671 [2024-12-07 16:34:03.345754] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:04.671 [2024-12-07 16:34:03.345788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:04.671 [2024-12-07 16:34:03.345870] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:04.671 [2024-12-07 16:34:03.345878] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:04.671 [2024-12-07 16:34:03.346095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:04.671 [2024-12-07 16:34:03.346223] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:04.671 [2024-12-07 16:34:03.346236] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:04.671 [2024-12-07 16:34:03.346338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.671 pt2 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.671 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.671 "name": "raid_bdev1", 00:08:04.671 "uuid": "66548137-7f3e-42b1-9eb0-7edfa0d7482f", 00:08:04.672 "strip_size_kb": 0, 00:08:04.672 "state": "online", 00:08:04.672 "raid_level": "raid1", 00:08:04.672 "superblock": true, 00:08:04.672 "num_base_bdevs": 2, 00:08:04.672 "num_base_bdevs_discovered": 1, 00:08:04.672 "num_base_bdevs_operational": 1, 00:08:04.672 "base_bdevs_list": [ 00:08:04.672 { 00:08:04.672 "name": null, 00:08:04.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.672 "is_configured": false, 00:08:04.672 "data_offset": 2048, 00:08:04.672 "data_size": 63488 00:08:04.672 }, 00:08:04.672 { 00:08:04.672 "name": "pt2", 00:08:04.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.672 "is_configured": true, 00:08:04.672 "data_offset": 2048, 00:08:04.672 "data_size": 63488 00:08:04.672 } 00:08:04.672 ] 00:08:04.672 }' 
00:08:04.672 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.672 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.930 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.930 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.930 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.930 [2024-12-07 16:34:03.810478] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.930 [2024-12-07 16:34:03.810583] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.930 [2024-12-07 16:34:03.810694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.930 [2024-12-07 16:34:03.810764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.931 [2024-12-07 16:34:03.810808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:04.931 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.931 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.931 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.931 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.931 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:04.931 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.191 [2024-12-07 16:34:03.874276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:05.191 [2024-12-07 16:34:03.874391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.191 [2024-12-07 16:34:03.874435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:05.191 [2024-12-07 16:34:03.874475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.191 [2024-12-07 16:34:03.877009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.191 [2024-12-07 16:34:03.877082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:05.191 [2024-12-07 16:34:03.877181] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:05.191 [2024-12-07 16:34:03.877245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:05.191 [2024-12-07 16:34:03.877403] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:05.191 [2024-12-07 16:34:03.877473] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:05.191 [2024-12-07 16:34:03.877515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:08:05.191 [2024-12-07 16:34:03.877594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:08:05.191 [2024-12-07 16:34:03.877703] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:05.191 [2024-12-07 16:34:03.877745] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:05.191 [2024-12-07 16:34:03.877998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:05.191 [2024-12-07 16:34:03.878155] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:05.191 [2024-12-07 16:34:03.878194] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:05.191 [2024-12-07 16:34:03.878385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.191 pt1 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.191 "name": "raid_bdev1", 00:08:05.191 "uuid": "66548137-7f3e-42b1-9eb0-7edfa0d7482f", 00:08:05.191 "strip_size_kb": 0, 00:08:05.191 "state": "online", 00:08:05.191 "raid_level": "raid1", 00:08:05.191 "superblock": true, 00:08:05.191 "num_base_bdevs": 2, 00:08:05.191 "num_base_bdevs_discovered": 1, 00:08:05.191 "num_base_bdevs_operational": 1, 00:08:05.191 "base_bdevs_list": [ 00:08:05.191 { 00:08:05.191 "name": null, 00:08:05.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.191 "is_configured": false, 00:08:05.191 "data_offset": 2048, 00:08:05.191 "data_size": 63488 00:08:05.191 }, 00:08:05.191 { 00:08:05.191 "name": "pt2", 00:08:05.191 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:05.191 "is_configured": true, 00:08:05.191 "data_offset": 2048, 00:08:05.191 "data_size": 63488 00:08:05.191 } 00:08:05.191 ] 00:08:05.191 }' 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.191 16:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.451 16:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:05.451 16:34:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:05.451 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.451 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.451 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.711 16:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:05.711 16:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.712 [2024-12-07 16:34:04.369763] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 66548137-7f3e-42b1-9eb0-7edfa0d7482f '!=' 66548137-7f3e-42b1-9eb0-7edfa0d7482f ']' 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74715 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74715 ']' 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74715 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74715 00:08:05.712 16:34:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:05.712 killing process with pid 74715 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74715' 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74715 00:08:05.712 [2024-12-07 16:34:04.457105] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.712 [2024-12-07 16:34:04.457215] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.712 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74715 00:08:05.712 [2024-12-07 16:34:04.457274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.712 [2024-12-07 16:34:04.457284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:05.712 [2024-12-07 16:34:04.499456] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.282 ************************************ 00:08:06.282 END TEST raid_superblock_test 00:08:06.282 16:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:06.282 00:08:06.282 real 0m5.177s 00:08:06.282 user 0m8.233s 00:08:06.282 sys 0m1.166s 00:08:06.282 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.282 16:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.282 ************************************ 00:08:06.282 16:34:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:06.282 16:34:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:06.282 16:34:04 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.282 16:34:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.282 ************************************ 00:08:06.282 START TEST raid_read_error_test 00:08:06.282 ************************************ 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:06.282 16:34:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dRija5M2pG 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75033 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75033 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75033 ']' 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:06.282 16:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.282 [2024-12-07 16:34:05.049606] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:06.282 [2024-12-07 16:34:05.049735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75033 ] 00:08:06.547 [2024-12-07 16:34:05.209311] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.547 [2024-12-07 16:34:05.279829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.547 [2024-12-07 16:34:05.356619] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.547 [2024-12-07 16:34:05.356664] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.121 BaseBdev1_malloc 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.121 true 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.121 [2024-12-07 16:34:05.931501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:07.121 [2024-12-07 16:34:05.931632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.121 [2024-12-07 16:34:05.931673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:07.121 [2024-12-07 16:34:05.931685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.121 [2024-12-07 16:34:05.934019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.121 [2024-12-07 16:34:05.934053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:07.121 BaseBdev1 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:07.121 BaseBdev2_malloc 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.121 true 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.121 [2024-12-07 16:34:05.987199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:07.121 [2024-12-07 16:34:05.987255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.121 [2024-12-07 16:34:05.987276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:07.121 [2024-12-07 16:34:05.987285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.121 [2024-12-07 16:34:05.989594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.121 [2024-12-07 16:34:05.989695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:07.121 BaseBdev2 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:07.121 16:34:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.121 16:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.121 [2024-12-07 16:34:05.999211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.121 [2024-12-07 16:34:06.001362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.121 [2024-12-07 16:34:06.001560] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:07.121 [2024-12-07 16:34:06.001573] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:07.121 [2024-12-07 16:34:06.001850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:07.121 [2024-12-07 16:34:06.002006] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:07.121 [2024-12-07 16:34:06.002020] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:07.121 [2024-12-07 16:34:06.002168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.121 16:34:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.121 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:07.121 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.121 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.121 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.121 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.121 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:07.121 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.121 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.121 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.121 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.121 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.121 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.121 16:34:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.122 16:34:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.381 16:34:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.381 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.381 "name": "raid_bdev1", 00:08:07.381 "uuid": "7fe2e060-fc8a-4008-98df-3efc820ac78b", 00:08:07.381 "strip_size_kb": 0, 00:08:07.381 "state": "online", 00:08:07.381 "raid_level": "raid1", 00:08:07.381 "superblock": true, 00:08:07.381 "num_base_bdevs": 2, 00:08:07.381 "num_base_bdevs_discovered": 2, 00:08:07.381 "num_base_bdevs_operational": 2, 00:08:07.381 "base_bdevs_list": [ 00:08:07.381 { 00:08:07.381 "name": "BaseBdev1", 00:08:07.381 "uuid": "584614a4-b4cf-5336-a0f5-736011a708c1", 00:08:07.381 "is_configured": true, 00:08:07.381 "data_offset": 2048, 00:08:07.381 "data_size": 63488 00:08:07.381 }, 00:08:07.381 { 00:08:07.381 "name": "BaseBdev2", 00:08:07.381 "uuid": "cf45f2f8-3994-58ac-b14e-4b08967f165a", 00:08:07.381 "is_configured": true, 00:08:07.381 "data_offset": 2048, 00:08:07.381 "data_size": 63488 00:08:07.381 } 00:08:07.381 ] 00:08:07.381 }' 00:08:07.381 16:34:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.381 16:34:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.641 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:07.641 16:34:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:07.901 [2024-12-07 16:34:06.566721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.840 16:34:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.840 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.841 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.841 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.841 "name": "raid_bdev1", 00:08:08.841 "uuid": "7fe2e060-fc8a-4008-98df-3efc820ac78b", 00:08:08.841 "strip_size_kb": 0, 00:08:08.841 "state": "online", 00:08:08.841 "raid_level": "raid1", 00:08:08.841 "superblock": true, 00:08:08.841 "num_base_bdevs": 2, 00:08:08.841 "num_base_bdevs_discovered": 2, 00:08:08.841 "num_base_bdevs_operational": 2, 00:08:08.841 "base_bdevs_list": [ 00:08:08.841 { 00:08:08.841 "name": "BaseBdev1", 00:08:08.841 "uuid": "584614a4-b4cf-5336-a0f5-736011a708c1", 00:08:08.841 "is_configured": true, 00:08:08.841 "data_offset": 2048, 00:08:08.841 "data_size": 63488 00:08:08.841 }, 00:08:08.841 { 00:08:08.841 "name": "BaseBdev2", 00:08:08.841 "uuid": "cf45f2f8-3994-58ac-b14e-4b08967f165a", 00:08:08.841 "is_configured": true, 00:08:08.841 "data_offset": 2048, 00:08:08.841 "data_size": 63488 
00:08:08.841 } 00:08:08.841 ] 00:08:08.841 }' 00:08:08.841 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.841 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.100 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:09.100 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.100 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.100 [2024-12-07 16:34:07.964180] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.100 [2024-12-07 16:34:07.964226] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.100 [2024-12-07 16:34:07.966540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.100 { 00:08:09.100 "results": [ 00:08:09.100 { 00:08:09.100 "job": "raid_bdev1", 00:08:09.100 "core_mask": "0x1", 00:08:09.100 "workload": "randrw", 00:08:09.100 "percentage": 50, 00:08:09.101 "status": "finished", 00:08:09.101 "queue_depth": 1, 00:08:09.101 "io_size": 131072, 00:08:09.101 "runtime": 1.398071, 00:08:09.101 "iops": 16196.602318480249, 00:08:09.101 "mibps": 2024.575289810031, 00:08:09.101 "io_failed": 0, 00:08:09.101 "io_timeout": 0, 00:08:09.101 "avg_latency_us": 59.32825113837186, 00:08:09.101 "min_latency_us": 21.799126637554586, 00:08:09.101 "max_latency_us": 1523.926637554585 00:08:09.101 } 00:08:09.101 ], 00:08:09.101 "core_count": 1 00:08:09.101 } 00:08:09.101 [2024-12-07 16:34:07.966664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.101 [2024-12-07 16:34:07.966759] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.101 [2024-12-07 16:34:07.966769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006980 name raid_bdev1, state offline 00:08:09.101 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.101 16:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75033 00:08:09.101 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75033 ']' 00:08:09.101 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75033 00:08:09.101 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:09.101 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.101 16:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75033 00:08:09.361 killing process with pid 75033 00:08:09.361 16:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.361 16:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.361 16:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75033' 00:08:09.361 16:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75033 00:08:09.361 [2024-12-07 16:34:08.018975] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.361 16:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75033 00:08:09.361 [2024-12-07 16:34:08.049279] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.621 16:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dRija5M2pG 00:08:09.621 16:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:09.621 16:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:09.621 ************************************ 00:08:09.621 END 
TEST raid_read_error_test 00:08:09.621 ************************************ 00:08:09.621 16:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:09.621 16:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:09.621 16:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.621 16:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:09.621 16:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:09.621 00:08:09.621 real 0m3.477s 00:08:09.621 user 0m4.308s 00:08:09.621 sys 0m0.619s 00:08:09.621 16:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.621 16:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.621 16:34:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:09.621 16:34:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:09.621 16:34:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.621 16:34:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:09.621 ************************************ 00:08:09.621 START TEST raid_write_error_test 00:08:09.621 ************************************ 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:09.621 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0HXWBL1z5X 00:08:09.881 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75163 00:08:09.881 16:34:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:09.881 16:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75163 00:08:09.881 16:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75163 ']' 00:08:09.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.881 16:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.881 16:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.881 16:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.881 16:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.881 16:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.881 [2024-12-07 16:34:08.599691] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:09.881 [2024-12-07 16:34:08.599797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75163 ] 00:08:09.881 [2024-12-07 16:34:08.760487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.141 [2024-12-07 16:34:08.830803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.141 [2024-12-07 16:34:08.907195] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.141 [2024-12-07 16:34:08.907240] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.712 BaseBdev1_malloc 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.712 true 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.712 [2024-12-07 16:34:09.505530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:10.712 [2024-12-07 16:34:09.505600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.712 [2024-12-07 16:34:09.505623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:10.712 [2024-12-07 16:34:09.505631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.712 [2024-12-07 16:34:09.508088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.712 [2024-12-07 16:34:09.508131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:10.712 BaseBdev1 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.712 BaseBdev2_malloc 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:10.712 16:34:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.712 true 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.712 [2024-12-07 16:34:09.560189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:10.712 [2024-12-07 16:34:09.560240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.712 [2024-12-07 16:34:09.560259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:10.712 [2024-12-07 16:34:09.560269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.712 [2024-12-07 16:34:09.562541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.712 [2024-12-07 16:34:09.562654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:10.712 BaseBdev2 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.712 [2024-12-07 16:34:09.572208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:10.712 [2024-12-07 16:34:09.574360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.712 [2024-12-07 16:34:09.574532] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:10.712 [2024-12-07 16:34:09.574550] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:10.712 [2024-12-07 16:34:09.574821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:10.712 [2024-12-07 16:34:09.574965] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:10.712 [2024-12-07 16:34:09.574984] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:10.712 [2024-12-07 16:34:09.575115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.712 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.972 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.972 "name": "raid_bdev1", 00:08:10.972 "uuid": "7f5386cf-8616-4062-9778-45a08d687cdf", 00:08:10.972 "strip_size_kb": 0, 00:08:10.972 "state": "online", 00:08:10.972 "raid_level": "raid1", 00:08:10.972 "superblock": true, 00:08:10.972 "num_base_bdevs": 2, 00:08:10.972 "num_base_bdevs_discovered": 2, 00:08:10.972 "num_base_bdevs_operational": 2, 00:08:10.972 "base_bdevs_list": [ 00:08:10.972 { 00:08:10.972 "name": "BaseBdev1", 00:08:10.972 "uuid": "bdc3cbe8-2ade-5559-8814-7da31e746277", 00:08:10.972 "is_configured": true, 00:08:10.972 "data_offset": 2048, 00:08:10.972 "data_size": 63488 00:08:10.972 }, 00:08:10.972 { 00:08:10.972 "name": "BaseBdev2", 00:08:10.972 "uuid": "bc08390a-0f5c-5313-b233-7e247387b8d8", 00:08:10.972 "is_configured": true, 00:08:10.972 "data_offset": 2048, 00:08:10.972 "data_size": 63488 00:08:10.972 } 00:08:10.972 ] 00:08:10.972 }' 00:08:10.972 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.972 16:34:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.232 16:34:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:11.233 16:34:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:11.233 [2024-12-07 16:34:10.067878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.173 [2024-12-07 16:34:10.986089] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:12.173 [2024-12-07 16:34:10.986251] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.173 [2024-12-07 16:34:10.986533] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:12.173 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.174 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.174 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.174 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.174 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.174 16:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.174 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.174 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.174 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.174 16:34:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.174 "name": "raid_bdev1", 00:08:12.174 "uuid": "7f5386cf-8616-4062-9778-45a08d687cdf", 00:08:12.174 "strip_size_kb": 0, 00:08:12.174 "state": "online", 00:08:12.174 "raid_level": "raid1", 00:08:12.174 "superblock": true, 00:08:12.174 "num_base_bdevs": 2, 00:08:12.174 "num_base_bdevs_discovered": 1, 00:08:12.174 "num_base_bdevs_operational": 1, 00:08:12.174 "base_bdevs_list": [ 00:08:12.174 { 00:08:12.174 "name": null, 00:08:12.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.174 "is_configured": false, 00:08:12.174 "data_offset": 0, 00:08:12.174 "data_size": 63488 00:08:12.174 }, 00:08:12.174 { 00:08:12.174 "name": 
"BaseBdev2", 00:08:12.174 "uuid": "bc08390a-0f5c-5313-b233-7e247387b8d8", 00:08:12.174 "is_configured": true, 00:08:12.174 "data_offset": 2048, 00:08:12.174 "data_size": 63488 00:08:12.174 } 00:08:12.174 ] 00:08:12.174 }' 00:08:12.174 16:34:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.174 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.744 [2024-12-07 16:34:11.471024] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.744 [2024-12-07 16:34:11.471073] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.744 [2024-12-07 16:34:11.473598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.744 [2024-12-07 16:34:11.473655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.744 [2024-12-07 16:34:11.473712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.744 [2024-12-07 16:34:11.473725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:12.744 { 00:08:12.744 "results": [ 00:08:12.744 { 00:08:12.744 "job": "raid_bdev1", 00:08:12.744 "core_mask": "0x1", 00:08:12.744 "workload": "randrw", 00:08:12.744 "percentage": 50, 00:08:12.744 "status": "finished", 00:08:12.744 "queue_depth": 1, 00:08:12.744 "io_size": 131072, 00:08:12.744 "runtime": 1.403741, 00:08:12.744 "iops": 20337.797357204785, 00:08:12.744 "mibps": 2542.224669650598, 00:08:12.744 "io_failed": 0, 00:08:12.744 "io_timeout": 0, 
00:08:12.744 "avg_latency_us": 46.70478804464125, 00:08:12.744 "min_latency_us": 21.240174672489083, 00:08:12.744 "max_latency_us": 1423.7624454148472 00:08:12.744 } 00:08:12.744 ], 00:08:12.744 "core_count": 1 00:08:12.744 } 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75163 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75163 ']' 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75163 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75163 00:08:12.744 killing process with pid 75163 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75163' 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75163 00:08:12.744 [2024-12-07 16:34:11.517999] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.744 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75163 00:08:12.744 [2024-12-07 16:34:11.545712] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.315 16:34:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0HXWBL1z5X 00:08:13.315 16:34:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:13.315 16:34:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:13.315 16:34:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:13.315 16:34:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:13.315 16:34:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:13.315 16:34:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:13.315 16:34:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:13.315 00:08:13.315 real 0m3.428s 00:08:13.315 user 0m4.245s 00:08:13.315 sys 0m0.604s 00:08:13.315 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.315 ************************************ 00:08:13.315 END TEST raid_write_error_test 00:08:13.315 ************************************ 00:08:13.315 16:34:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.315 16:34:11 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:13.315 16:34:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:13.315 16:34:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:13.315 16:34:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:13.315 16:34:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.315 16:34:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.315 ************************************ 00:08:13.315 START TEST raid_state_function_test 00:08:13.315 ************************************ 00:08:13.315 16:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:08:13.315 16:34:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:13.315 16:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:13.315 16:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:13.315 16:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:13.315 16:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:13.315 
16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75296 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:13.315 Process raid pid: 75296 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75296' 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75296 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75296 ']' 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.315 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.315 [2024-12-07 16:34:12.096862] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:13.315 [2024-12-07 16:34:12.097100] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.576 [2024-12-07 16:34:12.260504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.576 [2024-12-07 16:34:12.331113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.576 [2024-12-07 16:34:12.408215] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.576 [2024-12-07 16:34:12.408381] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.147 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.147 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:14.147 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.148 [2024-12-07 16:34:12.936791] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.148 [2024-12-07 16:34:12.936860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.148 [2024-12-07 16:34:12.936884] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.148 [2024-12-07 16:34:12.936895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.148 [2024-12-07 16:34:12.936901] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:14.148 [2024-12-07 16:34:12.936913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:14.148 "name": "Existed_Raid",
00:08:14.148 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:14.148 "strip_size_kb": 64,
00:08:14.148 "state": "configuring",
00:08:14.148 "raid_level": "raid0",
00:08:14.148 "superblock": false,
00:08:14.148 "num_base_bdevs": 3,
00:08:14.148 "num_base_bdevs_discovered": 0,
00:08:14.148 "num_base_bdevs_operational": 3,
00:08:14.148 "base_bdevs_list": [
00:08:14.148 {
00:08:14.148 "name": "BaseBdev1",
00:08:14.148 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:14.148 "is_configured": false,
00:08:14.148 "data_offset": 0,
00:08:14.148 "data_size": 0
00:08:14.148 },
00:08:14.148 {
00:08:14.148 "name": "BaseBdev2",
00:08:14.148 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:14.148 "is_configured": false,
00:08:14.148 "data_offset": 0,
00:08:14.148 "data_size": 0
00:08:14.148 },
00:08:14.148 {
00:08:14.148 "name": "BaseBdev3",
00:08:14.148 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:14.148 "is_configured": false,
00:08:14.148 "data_offset": 0,
00:08:14.148 "data_size": 0
00:08:14.148 }
00:08:14.148 ]
00:08:14.148 }'
00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:14.148 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.719 [2024-12-07 16:34:13.395932] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:14.719 [2024-12-07 16:34:13.396051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.719 [2024-12-07 16:34:13.407919] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:14.719 [2024-12-07 16:34:13.407999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:14.719 [2024-12-07 16:34:13.408025] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:14.719 [2024-12-07 16:34:13.408047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:14.719 [2024-12-07 16:34:13.408065] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:14.719 [2024-12-07 16:34:13.408085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.719 [2024-12-07 16:34:13.434708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:14.719 BaseBdev1
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.719 [
00:08:14.719 {
00:08:14.719 "name": "BaseBdev1",
00:08:14.719 "aliases": [
00:08:14.719 "e26f59c3-a993-4826-b030-2bd323beeb90"
00:08:14.719 ],
00:08:14.719 "product_name": "Malloc disk",
00:08:14.719 "block_size": 512,
00:08:14.719 "num_blocks": 65536,
00:08:14.719 "uuid": "e26f59c3-a993-4826-b030-2bd323beeb90",
00:08:14.719 "assigned_rate_limits": {
00:08:14.719 "rw_ios_per_sec": 0,
00:08:14.719 "rw_mbytes_per_sec": 0,
00:08:14.719 "r_mbytes_per_sec": 0,
00:08:14.719 "w_mbytes_per_sec": 0
00:08:14.719 },
00:08:14.719 "claimed": true,
00:08:14.719 "claim_type": "exclusive_write",
00:08:14.719 "zoned": false,
00:08:14.719 "supported_io_types": {
00:08:14.719 "read": true,
00:08:14.719 "write": true,
00:08:14.719 "unmap": true,
00:08:14.719 "flush": true,
00:08:14.719 "reset": true,
00:08:14.719 "nvme_admin": false,
00:08:14.719 "nvme_io": false,
00:08:14.719 "nvme_io_md": false,
00:08:14.719 "write_zeroes": true,
00:08:14.719 "zcopy": true,
00:08:14.719 "get_zone_info": false,
00:08:14.719 "zone_management": false,
00:08:14.719 "zone_append": false,
00:08:14.719 "compare": false,
00:08:14.719 "compare_and_write": false,
00:08:14.719 "abort": true,
00:08:14.719 "seek_hole": false,
00:08:14.719 "seek_data": false,
00:08:14.719 "copy": true,
00:08:14.719 "nvme_iov_md": false
00:08:14.719 },
00:08:14.719 "memory_domains": [
00:08:14.719 {
00:08:14.719 "dma_device_id": "system",
00:08:14.719 "dma_device_type": 1
00:08:14.719 },
00:08:14.719 {
00:08:14.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:14.719 "dma_device_type": 2
00:08:14.719 }
00:08:14.719 ],
00:08:14.719 "driver_specific": {}
00:08:14.719 }
00:08:14.719 ]
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:14.719 "name": "Existed_Raid",
00:08:14.719 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:14.719 "strip_size_kb": 64,
00:08:14.719 "state": "configuring",
00:08:14.719 "raid_level": "raid0",
00:08:14.719 "superblock": false,
00:08:14.719 "num_base_bdevs": 3,
00:08:14.719 "num_base_bdevs_discovered": 1,
00:08:14.719 "num_base_bdevs_operational": 3,
00:08:14.719 "base_bdevs_list": [
00:08:14.719 {
00:08:14.719 "name": "BaseBdev1",
00:08:14.719 "uuid": "e26f59c3-a993-4826-b030-2bd323beeb90", 00:08:14.719 "is_configured": true, 00:08:14.719 "data_offset": 0, 00:08:14.719 "data_size": 65536 00:08:14.719 }, 00:08:14.719 { 00:08:14.719 "name": "BaseBdev2", 00:08:14.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.719 "is_configured": false, 00:08:14.719 "data_offset": 0, 00:08:14.719 "data_size": 0 00:08:14.719 }, 00:08:14.719 { 00:08:14.719 "name": "BaseBdev3", 00:08:14.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.719 "is_configured": false, 00:08:14.719 "data_offset": 0, 00:08:14.719 "data_size": 0 00:08:14.719 } 00:08:14.719 ] 00:08:14.719 }' 00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.719 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.290 [2024-12-07 16:34:13.913944] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:15.290 [2024-12-07 16:34:13.914097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.290 [2024-12-07 
16:34:13.925936] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.290 [2024-12-07 16:34:13.928055] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.290 [2024-12-07 16:34:13.928129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.290 [2024-12-07 16:34:13.928157] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:15.290 [2024-12-07 16:34:13.928179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.290 "name": "Existed_Raid", 00:08:15.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.290 "strip_size_kb": 64, 00:08:15.290 "state": "configuring", 00:08:15.290 "raid_level": "raid0", 00:08:15.290 "superblock": false, 00:08:15.290 "num_base_bdevs": 3, 00:08:15.290 "num_base_bdevs_discovered": 1, 00:08:15.290 "num_base_bdevs_operational": 3, 00:08:15.290 "base_bdevs_list": [ 00:08:15.290 { 00:08:15.290 "name": "BaseBdev1", 00:08:15.290 "uuid": "e26f59c3-a993-4826-b030-2bd323beeb90", 00:08:15.290 "is_configured": true, 00:08:15.290 "data_offset": 0, 00:08:15.290 "data_size": 65536 00:08:15.290 }, 00:08:15.290 { 00:08:15.290 "name": "BaseBdev2", 00:08:15.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.290 "is_configured": false, 00:08:15.290 "data_offset": 0, 00:08:15.290 "data_size": 0 00:08:15.290 }, 00:08:15.290 { 00:08:15.290 "name": "BaseBdev3", 00:08:15.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.290 "is_configured": false, 00:08:15.290 "data_offset": 0, 00:08:15.290 "data_size": 0 00:08:15.290 } 00:08:15.290 ] 00:08:15.290 }' 00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:15.290 16:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.550 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:15.550 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.550 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.550 [2024-12-07 16:34:14.317565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:15.550 BaseBdev2
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.551 [
00:08:15.551 {
00:08:15.551 "name": "BaseBdev2",
00:08:15.551 "aliases": [
00:08:15.551 "426d83c9-f34d-4736-af43-857e108e75e8"
00:08:15.551 ],
00:08:15.551 "product_name": "Malloc disk",
00:08:15.551 "block_size": 512,
00:08:15.551 "num_blocks": 65536,
00:08:15.551 "uuid": "426d83c9-f34d-4736-af43-857e108e75e8",
00:08:15.551 "assigned_rate_limits": {
00:08:15.551 "rw_ios_per_sec": 0,
00:08:15.551 "rw_mbytes_per_sec": 0,
00:08:15.551 "r_mbytes_per_sec": 0,
00:08:15.551 "w_mbytes_per_sec": 0
00:08:15.551 },
00:08:15.551 "claimed": true,
00:08:15.551 "claim_type": "exclusive_write",
00:08:15.551 "zoned": false,
00:08:15.551 "supported_io_types": {
00:08:15.551 "read": true,
00:08:15.551 "write": true,
00:08:15.551 "unmap": true,
00:08:15.551 "flush": true,
00:08:15.551 "reset": true,
00:08:15.551 "nvme_admin": false,
00:08:15.551 "nvme_io": false,
00:08:15.551 "nvme_io_md": false,
00:08:15.551 "write_zeroes": true,
00:08:15.551 "zcopy": true,
00:08:15.551 "get_zone_info": false,
00:08:15.551 "zone_management": false,
00:08:15.551 "zone_append": false,
00:08:15.551 "compare": false,
00:08:15.551 "compare_and_write": false,
00:08:15.551 "abort": true,
00:08:15.551 "seek_hole": false,
00:08:15.551 "seek_data": false,
00:08:15.551 "copy": true,
00:08:15.551 "nvme_iov_md": false
00:08:15.551 },
00:08:15.551 "memory_domains": [
00:08:15.551 {
00:08:15.551 "dma_device_id": "system",
00:08:15.551 "dma_device_type": 1
00:08:15.551 },
00:08:15.551 {
00:08:15.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:15.551 "dma_device_type": 2
00:08:15.551 }
00:08:15.551 ],
00:08:15.551 "driver_specific": {}
00:08:15.551 }
00:08:15.551 ]
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:15.551 "name": "Existed_Raid",
00:08:15.551 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:15.551 "strip_size_kb": 64,
00:08:15.551 "state": "configuring",
00:08:15.551 "raid_level": "raid0",
00:08:15.551 "superblock": false,
00:08:15.551 "num_base_bdevs": 3,
00:08:15.551 "num_base_bdevs_discovered": 2,
00:08:15.551 "num_base_bdevs_operational": 3,
00:08:15.551 "base_bdevs_list": [
00:08:15.551 {
00:08:15.551 "name": "BaseBdev1",
00:08:15.551 "uuid": "e26f59c3-a993-4826-b030-2bd323beeb90",
00:08:15.551 "is_configured": true,
00:08:15.551 "data_offset": 0,
00:08:15.551 "data_size": 65536
00:08:15.551 },
00:08:15.551 {
00:08:15.551 "name": "BaseBdev2",
00:08:15.551 "uuid": "426d83c9-f34d-4736-af43-857e108e75e8",
00:08:15.551 "is_configured": true,
00:08:15.551 "data_offset": 0,
00:08:15.551 "data_size": 65536
00:08:15.551 },
00:08:15.551 {
00:08:15.551 "name": "BaseBdev3",
00:08:15.551 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:15.551 "is_configured": false,
00:08:15.551 "data_offset": 0,
00:08:15.551 "data_size": 0
00:08:15.551 }
00:08:15.551 ]
00:08:15.551 }'
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:15.551 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.122 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:16.122 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.122 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.122 [2024-12-07 16:34:14.785606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:16.123 [2024-12-07 16:34:14.785659] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 [2024-12-07 16:34:14.785671] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 [2024-12-07 16:34:14.785984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 [2024-12-07 16:34:14.786131] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 [2024-12-07 16:34:14.786140] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 [2024-12-07 16:34:14.786415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:16.123 BaseBdev3
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.123 [
00:08:16.123 {
00:08:16.123 "name": "BaseBdev3",
00:08:16.123 "aliases": [
00:08:16.123 "839bbee4-c31d-43bc-82aa-1381083289a8"
00:08:16.123 ],
00:08:16.123 "product_name": "Malloc disk",
00:08:16.123 "block_size": 512,
00:08:16.123 "num_blocks": 65536,
00:08:16.123 "uuid": "839bbee4-c31d-43bc-82aa-1381083289a8",
00:08:16.123 "assigned_rate_limits": {
00:08:16.123 "rw_ios_per_sec": 0,
00:08:16.123 "rw_mbytes_per_sec": 0,
00:08:16.123 "r_mbytes_per_sec": 0,
00:08:16.123 "w_mbytes_per_sec": 0
00:08:16.123 },
00:08:16.123 "claimed": true,
00:08:16.123 "claim_type": "exclusive_write",
00:08:16.123 "zoned": false,
00:08:16.123 "supported_io_types": {
00:08:16.123 "read": true,
00:08:16.123 "write": true,
00:08:16.123 "unmap": true,
00:08:16.123 "flush": true,
00:08:16.123 "reset": true,
00:08:16.123 "nvme_admin": false,
00:08:16.123 "nvme_io": false,
00:08:16.123 "nvme_io_md": false,
00:08:16.123 "write_zeroes": true,
00:08:16.123 "zcopy": true,
00:08:16.123 "get_zone_info": false,
00:08:16.123 "zone_management": false,
00:08:16.123 "zone_append": false,
00:08:16.123 "compare": false,
00:08:16.123 "compare_and_write": false,
00:08:16.123 "abort": true,
00:08:16.123 "seek_hole": false,
00:08:16.123 "seek_data": false,
00:08:16.123 "copy": true,
00:08:16.123 "nvme_iov_md": false
00:08:16.123 },
00:08:16.123 "memory_domains": [
00:08:16.123 {
00:08:16.123 "dma_device_id": "system",
00:08:16.123 "dma_device_type": 1
00:08:16.123 },
00:08:16.123 {
00:08:16.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:16.123 "dma_device_type": 2
00:08:16.123 }
00:08:16.123 ],
00:08:16.123 "driver_specific": {}
00:08:16.123 }
00:08:16.123 ]
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:16.123 "name": "Existed_Raid",
00:08:16.123 "uuid": "7312195b-bd24-483c-aa4f-040f5ad83e58",
00:08:16.123 "strip_size_kb": 64,
00:08:16.123 "state": "online",
00:08:16.123 "raid_level": "raid0",
00:08:16.123 "superblock": false,
00:08:16.123 "num_base_bdevs": 3,
00:08:16.123 "num_base_bdevs_discovered": 3,
00:08:16.123 "num_base_bdevs_operational": 3,
00:08:16.123 "base_bdevs_list": [
00:08:16.123 {
00:08:16.123 "name": "BaseBdev1",
00:08:16.123 "uuid": "e26f59c3-a993-4826-b030-2bd323beeb90",
00:08:16.123 "is_configured": true,
00:08:16.123 "data_offset": 0,
00:08:16.123 "data_size": 65536
00:08:16.123 },
00:08:16.123 {
00:08:16.123 "name": "BaseBdev2",
00:08:16.123 "uuid": "426d83c9-f34d-4736-af43-857e108e75e8",
00:08:16.123 "is_configured": true,
00:08:16.123 "data_offset": 0,
00:08:16.123 "data_size": 65536
00:08:16.123 },
00:08:16.123 {
00:08:16.123 "name": "BaseBdev3",
00:08:16.123 "uuid": "839bbee4-c31d-43bc-82aa-1381083289a8",
00:08:16.123 "is_configured": true,
00:08:16.123 "data_offset": 0,
00:08:16.123 "data_size": 65536
00:08:16.123 }
00:08:16.123 ]
00:08:16.123 }'
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:16.123 16:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.382 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:16.382 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:16.382 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:16.382 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:16.382 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:16.382 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:16.382 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:16.382 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:16.659 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.659 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.659 [2024-12-07 16:34:15.285106] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:16.659 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.659 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:16.659 "name": "Existed_Raid",
00:08:16.659 "aliases": [
00:08:16.659 "7312195b-bd24-483c-aa4f-040f5ad83e58"
00:08:16.659 ],
00:08:16.659 "product_name": "Raid Volume",
00:08:16.659 "block_size": 512,
00:08:16.659 "num_blocks": 196608,
00:08:16.659 "uuid": "7312195b-bd24-483c-aa4f-040f5ad83e58",
00:08:16.659 "assigned_rate_limits": {
00:08:16.659 "rw_ios_per_sec": 0,
00:08:16.659 "rw_mbytes_per_sec": 0,
00:08:16.659 "r_mbytes_per_sec": 0,
00:08:16.659 "w_mbytes_per_sec": 0
00:08:16.659 },
00:08:16.659 "claimed": false,
00:08:16.659 "zoned": false,
00:08:16.659 "supported_io_types": {
00:08:16.659 "read": true,
00:08:16.659 "write": true,
00:08:16.659 "unmap": true,
00:08:16.659 "flush": true,
00:08:16.659 "reset": true,
00:08:16.659 "nvme_admin": false,
00:08:16.659 "nvme_io": false,
00:08:16.659 "nvme_io_md": false,
00:08:16.659 "write_zeroes": true,
00:08:16.659 "zcopy": false,
00:08:16.659 "get_zone_info": false,
00:08:16.659 "zone_management": false,
00:08:16.659 "zone_append": false,
00:08:16.659 "compare": false,
00:08:16.659 "compare_and_write": false,
00:08:16.659 "abort": false,
00:08:16.659 "seek_hole": false,
00:08:16.659 "seek_data": false,
00:08:16.659 "copy": false,
00:08:16.659 "nvme_iov_md": false
00:08:16.659 },
00:08:16.659 "memory_domains": [
00:08:16.659 {
00:08:16.659 "dma_device_id": "system",
00:08:16.659 "dma_device_type": 1
00:08:16.659 },
00:08:16.659 {
00:08:16.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:16.659 "dma_device_type": 2
00:08:16.659 },
00:08:16.659 {
00:08:16.659 "dma_device_id": "system",
00:08:16.659 "dma_device_type": 1
00:08:16.659 },
00:08:16.659 {
00:08:16.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:16.659 "dma_device_type": 2
00:08:16.659 },
00:08:16.659 {
00:08:16.659 "dma_device_id": "system",
00:08:16.659 "dma_device_type": 1
00:08:16.659 },
00:08:16.659 {
00:08:16.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:16.659 "dma_device_type": 2
00:08:16.659 }
00:08:16.659 ],
00:08:16.659 "driver_specific": {
00:08:16.659 "raid": {
00:08:16.659 "uuid": "7312195b-bd24-483c-aa4f-040f5ad83e58",
00:08:16.659 "strip_size_kb": 64,
00:08:16.659 "state": "online",
00:08:16.659 "raid_level": "raid0",
00:08:16.659 "superblock": false,
00:08:16.659 "num_base_bdevs": 3,
00:08:16.659 "num_base_bdevs_discovered": 3,
00:08:16.659 "num_base_bdevs_operational": 3,
00:08:16.659 "base_bdevs_list": [
00:08:16.659 {
00:08:16.659 "name": "BaseBdev1",
00:08:16.659 "uuid": "e26f59c3-a993-4826-b030-2bd323beeb90",
00:08:16.659 "is_configured": true,
00:08:16.659 "data_offset": 0,
00:08:16.659 "data_size": 65536
00:08:16.659 },
00:08:16.659 {
00:08:16.659 "name": "BaseBdev2",
00:08:16.659 "uuid": "426d83c9-f34d-4736-af43-857e108e75e8",
00:08:16.659 "is_configured": true,
00:08:16.659 "data_offset": 0,
00:08:16.659 "data_size": 65536
00:08:16.659 },
00:08:16.659 {
00:08:16.659 "name": "BaseBdev3",
00:08:16.659 "uuid": "839bbee4-c31d-43bc-82aa-1381083289a8",
00:08:16.659 "is_configured": true,
00:08:16.659 "data_offset": 0,
00:08:16.659 "data_size": 65536
00:08:16.659 }
00:08:16.659 ]
00:08:16.659 }
00:08:16.659 }
00:08:16.659 }'
00:08:16.659 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:16.659 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:16.659 BaseBdev2
00:08:16.659 BaseBdev3'
00:08:16.659 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:16.659 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:16.659 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.660 [2024-12-07 16:34:15.500501]
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:16.660 [2024-12-07 16:34:15.500535] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.660 [2024-12-07 16:34:15.500597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.660 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.918 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.918 "name": "Existed_Raid", 00:08:16.918 "uuid": "7312195b-bd24-483c-aa4f-040f5ad83e58", 00:08:16.918 "strip_size_kb": 64, 00:08:16.918 "state": "offline", 00:08:16.918 "raid_level": "raid0", 00:08:16.918 "superblock": false, 00:08:16.918 "num_base_bdevs": 3, 00:08:16.918 "num_base_bdevs_discovered": 2, 00:08:16.918 "num_base_bdevs_operational": 2, 00:08:16.918 "base_bdevs_list": [ 00:08:16.918 { 00:08:16.918 "name": null, 00:08:16.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.918 "is_configured": false, 00:08:16.918 "data_offset": 0, 00:08:16.918 "data_size": 65536 00:08:16.918 }, 00:08:16.918 { 00:08:16.918 "name": "BaseBdev2", 00:08:16.918 "uuid": "426d83c9-f34d-4736-af43-857e108e75e8", 00:08:16.918 "is_configured": true, 00:08:16.918 "data_offset": 0, 00:08:16.918 "data_size": 65536 00:08:16.918 }, 00:08:16.918 { 00:08:16.918 "name": "BaseBdev3", 00:08:16.918 "uuid": "839bbee4-c31d-43bc-82aa-1381083289a8", 00:08:16.918 "is_configured": true, 00:08:16.918 "data_offset": 0, 00:08:16.918 "data_size": 65536 00:08:16.918 } 00:08:16.918 ] 00:08:16.918 }' 00:08:16.919 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.919 16:34:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.178 [2024-12-07 16:34:15.972392] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.178 16:34:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:17.178 16:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.178 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.178 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:17.178 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:17.178 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:17.178 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.178 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.178 [2024-12-07 16:34:16.052121] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:17.178 [2024-12-07 16:34:16.052178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.438 BaseBdev2 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.438 [ 00:08:17.438 { 00:08:17.438 "name": "BaseBdev2", 00:08:17.438 "aliases": [ 00:08:17.438 "4f10b7bd-e7cf-4f8e-8bcf-a9934c11d53a" 00:08:17.438 ], 00:08:17.438 "product_name": "Malloc disk", 00:08:17.438 "block_size": 512, 00:08:17.438 "num_blocks": 65536, 00:08:17.438 "uuid": "4f10b7bd-e7cf-4f8e-8bcf-a9934c11d53a", 00:08:17.438 "assigned_rate_limits": { 00:08:17.438 "rw_ios_per_sec": 0, 00:08:17.438 "rw_mbytes_per_sec": 0, 00:08:17.438 "r_mbytes_per_sec": 0, 00:08:17.438 "w_mbytes_per_sec": 0 00:08:17.438 }, 00:08:17.438 "claimed": false, 00:08:17.438 "zoned": false, 00:08:17.438 "supported_io_types": { 00:08:17.438 "read": true, 00:08:17.438 "write": true, 00:08:17.438 "unmap": true, 00:08:17.438 "flush": true, 00:08:17.438 "reset": true, 00:08:17.438 "nvme_admin": false, 00:08:17.438 "nvme_io": false, 00:08:17.438 "nvme_io_md": false, 00:08:17.438 "write_zeroes": true, 00:08:17.438 "zcopy": true, 00:08:17.438 "get_zone_info": false, 00:08:17.438 "zone_management": false, 00:08:17.438 "zone_append": false, 00:08:17.438 "compare": false, 00:08:17.438 "compare_and_write": false, 00:08:17.438 "abort": true, 00:08:17.438 "seek_hole": false, 00:08:17.438 "seek_data": false, 00:08:17.438 "copy": true, 00:08:17.438 "nvme_iov_md": false 00:08:17.438 }, 00:08:17.438 "memory_domains": [ 00:08:17.438 { 00:08:17.438 "dma_device_id": "system", 00:08:17.438 "dma_device_type": 1 00:08:17.438 }, 
00:08:17.438 { 00:08:17.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.438 "dma_device_type": 2 00:08:17.438 } 00:08:17.438 ], 00:08:17.438 "driver_specific": {} 00:08:17.438 } 00:08:17.438 ] 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.438 BaseBdev3 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.438 [ 00:08:17.438 { 00:08:17.438 "name": "BaseBdev3", 00:08:17.438 "aliases": [ 00:08:17.438 "1ebd1227-18dc-483a-b3f7-40e0da3617aa" 00:08:17.438 ], 00:08:17.438 "product_name": "Malloc disk", 00:08:17.438 "block_size": 512, 00:08:17.438 "num_blocks": 65536, 00:08:17.438 "uuid": "1ebd1227-18dc-483a-b3f7-40e0da3617aa", 00:08:17.438 "assigned_rate_limits": { 00:08:17.438 "rw_ios_per_sec": 0, 00:08:17.438 "rw_mbytes_per_sec": 0, 00:08:17.438 "r_mbytes_per_sec": 0, 00:08:17.438 "w_mbytes_per_sec": 0 00:08:17.438 }, 00:08:17.438 "claimed": false, 00:08:17.438 "zoned": false, 00:08:17.438 "supported_io_types": { 00:08:17.438 "read": true, 00:08:17.438 "write": true, 00:08:17.438 "unmap": true, 00:08:17.438 "flush": true, 00:08:17.438 "reset": true, 00:08:17.438 "nvme_admin": false, 00:08:17.438 "nvme_io": false, 00:08:17.438 "nvme_io_md": false, 00:08:17.438 "write_zeroes": true, 00:08:17.438 "zcopy": true, 00:08:17.438 "get_zone_info": false, 00:08:17.438 "zone_management": false, 00:08:17.438 "zone_append": false, 00:08:17.438 "compare": false, 00:08:17.438 "compare_and_write": false, 00:08:17.438 "abort": true, 00:08:17.438 "seek_hole": false, 00:08:17.438 "seek_data": false, 00:08:17.438 "copy": true, 00:08:17.438 "nvme_iov_md": false 00:08:17.438 }, 00:08:17.438 "memory_domains": [ 00:08:17.438 { 00:08:17.438 "dma_device_id": "system", 00:08:17.438 "dma_device_type": 1 00:08:17.438 }, 00:08:17.438 { 
00:08:17.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.438 "dma_device_type": 2 00:08:17.438 } 00:08:17.438 ], 00:08:17.438 "driver_specific": {} 00:08:17.438 } 00:08:17.438 ] 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.438 [2024-12-07 16:34:16.247751] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:17.438 [2024-12-07 16:34:16.247876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:17.438 [2024-12-07 16:34:16.247917] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.438 [2024-12-07 16:34:16.249970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.438 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.439 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.439 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.439 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.439 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.439 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.439 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.439 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.439 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.439 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.439 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.439 "name": "Existed_Raid", 00:08:17.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.439 "strip_size_kb": 64, 00:08:17.439 "state": "configuring", 00:08:17.439 "raid_level": "raid0", 00:08:17.439 "superblock": false, 00:08:17.439 "num_base_bdevs": 3, 00:08:17.439 "num_base_bdevs_discovered": 2, 00:08:17.439 "num_base_bdevs_operational": 3, 00:08:17.439 "base_bdevs_list": [ 00:08:17.439 { 00:08:17.439 "name": "BaseBdev1", 00:08:17.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.439 
"is_configured": false, 00:08:17.439 "data_offset": 0, 00:08:17.439 "data_size": 0 00:08:17.439 }, 00:08:17.439 { 00:08:17.439 "name": "BaseBdev2", 00:08:17.439 "uuid": "4f10b7bd-e7cf-4f8e-8bcf-a9934c11d53a", 00:08:17.439 "is_configured": true, 00:08:17.439 "data_offset": 0, 00:08:17.439 "data_size": 65536 00:08:17.439 }, 00:08:17.439 { 00:08:17.439 "name": "BaseBdev3", 00:08:17.439 "uuid": "1ebd1227-18dc-483a-b3f7-40e0da3617aa", 00:08:17.439 "is_configured": true, 00:08:17.439 "data_offset": 0, 00:08:17.439 "data_size": 65536 00:08:17.439 } 00:08:17.439 ] 00:08:17.439 }' 00:08:17.439 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.439 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.007 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:18.007 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.007 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.007 [2024-12-07 16:34:16.711057] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.008 16:34:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.008 "name": "Existed_Raid", 00:08:18.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.008 "strip_size_kb": 64, 00:08:18.008 "state": "configuring", 00:08:18.008 "raid_level": "raid0", 00:08:18.008 "superblock": false, 00:08:18.008 "num_base_bdevs": 3, 00:08:18.008 "num_base_bdevs_discovered": 1, 00:08:18.008 "num_base_bdevs_operational": 3, 00:08:18.008 "base_bdevs_list": [ 00:08:18.008 { 00:08:18.008 "name": "BaseBdev1", 00:08:18.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.008 "is_configured": false, 00:08:18.008 "data_offset": 0, 00:08:18.008 "data_size": 0 00:08:18.008 }, 00:08:18.008 { 00:08:18.008 "name": null, 00:08:18.008 "uuid": "4f10b7bd-e7cf-4f8e-8bcf-a9934c11d53a", 00:08:18.008 "is_configured": false, 00:08:18.008 "data_offset": 0, 
00:08:18.008 "data_size": 65536 00:08:18.008 }, 00:08:18.008 { 00:08:18.008 "name": "BaseBdev3", 00:08:18.008 "uuid": "1ebd1227-18dc-483a-b3f7-40e0da3617aa", 00:08:18.008 "is_configured": true, 00:08:18.008 "data_offset": 0, 00:08:18.008 "data_size": 65536 00:08:18.008 } 00:08:18.008 ] 00:08:18.008 }' 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.008 16:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.266 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.266 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:18.266 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.266 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.266 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.266 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:18.266 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:18.266 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.266 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.535 [2024-12-07 16:34:17.175641] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.535 BaseBdev1 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.535 [ 00:08:18.535 { 00:08:18.535 "name": "BaseBdev1", 00:08:18.535 "aliases": [ 00:08:18.535 "deeb57f9-f975-4239-ad30-69628af91b85" 00:08:18.535 ], 00:08:18.535 "product_name": "Malloc disk", 00:08:18.535 "block_size": 512, 00:08:18.535 "num_blocks": 65536, 00:08:18.535 "uuid": "deeb57f9-f975-4239-ad30-69628af91b85", 00:08:18.535 "assigned_rate_limits": { 00:08:18.535 "rw_ios_per_sec": 0, 00:08:18.535 "rw_mbytes_per_sec": 0, 00:08:18.535 "r_mbytes_per_sec": 0, 00:08:18.535 "w_mbytes_per_sec": 0 00:08:18.535 }, 00:08:18.535 "claimed": true, 00:08:18.535 "claim_type": "exclusive_write", 00:08:18.535 "zoned": false, 00:08:18.535 "supported_io_types": { 00:08:18.535 "read": true, 00:08:18.535 "write": true, 00:08:18.535 "unmap": 
true, 00:08:18.535 "flush": true, 00:08:18.535 "reset": true, 00:08:18.535 "nvme_admin": false, 00:08:18.535 "nvme_io": false, 00:08:18.535 "nvme_io_md": false, 00:08:18.535 "write_zeroes": true, 00:08:18.535 "zcopy": true, 00:08:18.535 "get_zone_info": false, 00:08:18.535 "zone_management": false, 00:08:18.535 "zone_append": false, 00:08:18.535 "compare": false, 00:08:18.535 "compare_and_write": false, 00:08:18.535 "abort": true, 00:08:18.535 "seek_hole": false, 00:08:18.535 "seek_data": false, 00:08:18.535 "copy": true, 00:08:18.535 "nvme_iov_md": false 00:08:18.535 }, 00:08:18.535 "memory_domains": [ 00:08:18.535 { 00:08:18.535 "dma_device_id": "system", 00:08:18.535 "dma_device_type": 1 00:08:18.535 }, 00:08:18.535 { 00:08:18.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.535 "dma_device_type": 2 00:08:18.535 } 00:08:18.535 ], 00:08:18.535 "driver_specific": {} 00:08:18.535 } 00:08:18.535 ] 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.535 16:34:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.535 "name": "Existed_Raid", 00:08:18.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.535 "strip_size_kb": 64, 00:08:18.535 "state": "configuring", 00:08:18.535 "raid_level": "raid0", 00:08:18.535 "superblock": false, 00:08:18.535 "num_base_bdevs": 3, 00:08:18.535 "num_base_bdevs_discovered": 2, 00:08:18.535 "num_base_bdevs_operational": 3, 00:08:18.535 "base_bdevs_list": [ 00:08:18.535 { 00:08:18.535 "name": "BaseBdev1", 00:08:18.535 "uuid": "deeb57f9-f975-4239-ad30-69628af91b85", 00:08:18.535 "is_configured": true, 00:08:18.535 "data_offset": 0, 00:08:18.535 "data_size": 65536 00:08:18.535 }, 00:08:18.535 { 00:08:18.535 "name": null, 00:08:18.535 "uuid": "4f10b7bd-e7cf-4f8e-8bcf-a9934c11d53a", 00:08:18.535 "is_configured": false, 00:08:18.535 "data_offset": 0, 00:08:18.535 "data_size": 65536 00:08:18.535 }, 00:08:18.535 { 00:08:18.535 "name": "BaseBdev3", 00:08:18.535 "uuid": "1ebd1227-18dc-483a-b3f7-40e0da3617aa", 00:08:18.535 "is_configured": true, 00:08:18.535 "data_offset": 0, 
00:08:18.535 "data_size": 65536 00:08:18.535 } 00:08:18.535 ] 00:08:18.535 }' 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.535 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.808 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.808 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.808 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.808 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:18.808 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.066 [2024-12-07 16:34:17.738982] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.066 "name": "Existed_Raid", 00:08:19.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.066 "strip_size_kb": 64, 00:08:19.066 "state": "configuring", 00:08:19.066 "raid_level": "raid0", 00:08:19.066 "superblock": false, 00:08:19.066 "num_base_bdevs": 3, 00:08:19.066 "num_base_bdevs_discovered": 1, 00:08:19.066 "num_base_bdevs_operational": 3, 00:08:19.066 "base_bdevs_list": [ 00:08:19.066 { 00:08:19.066 "name": "BaseBdev1", 00:08:19.066 "uuid": "deeb57f9-f975-4239-ad30-69628af91b85", 00:08:19.066 "is_configured": true, 00:08:19.066 "data_offset": 0, 00:08:19.066 "data_size": 65536 00:08:19.066 }, 00:08:19.066 { 
00:08:19.066 "name": null, 00:08:19.066 "uuid": "4f10b7bd-e7cf-4f8e-8bcf-a9934c11d53a", 00:08:19.066 "is_configured": false, 00:08:19.066 "data_offset": 0, 00:08:19.066 "data_size": 65536 00:08:19.066 }, 00:08:19.066 { 00:08:19.066 "name": null, 00:08:19.066 "uuid": "1ebd1227-18dc-483a-b3f7-40e0da3617aa", 00:08:19.066 "is_configured": false, 00:08:19.066 "data_offset": 0, 00:08:19.066 "data_size": 65536 00:08:19.066 } 00:08:19.066 ] 00:08:19.066 }' 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.066 16:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.325 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.325 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:19.325 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.325 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.584 [2024-12-07 16:34:18.254110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.584 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.584 "name": "Existed_Raid", 00:08:19.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.584 "strip_size_kb": 64, 00:08:19.584 "state": "configuring", 00:08:19.584 "raid_level": "raid0", 00:08:19.584 
"superblock": false, 00:08:19.584 "num_base_bdevs": 3, 00:08:19.584 "num_base_bdevs_discovered": 2, 00:08:19.584 "num_base_bdevs_operational": 3, 00:08:19.584 "base_bdevs_list": [ 00:08:19.584 { 00:08:19.584 "name": "BaseBdev1", 00:08:19.584 "uuid": "deeb57f9-f975-4239-ad30-69628af91b85", 00:08:19.584 "is_configured": true, 00:08:19.584 "data_offset": 0, 00:08:19.584 "data_size": 65536 00:08:19.584 }, 00:08:19.584 { 00:08:19.584 "name": null, 00:08:19.584 "uuid": "4f10b7bd-e7cf-4f8e-8bcf-a9934c11d53a", 00:08:19.584 "is_configured": false, 00:08:19.585 "data_offset": 0, 00:08:19.585 "data_size": 65536 00:08:19.585 }, 00:08:19.585 { 00:08:19.585 "name": "BaseBdev3", 00:08:19.585 "uuid": "1ebd1227-18dc-483a-b3f7-40e0da3617aa", 00:08:19.585 "is_configured": true, 00:08:19.585 "data_offset": 0, 00:08:19.585 "data_size": 65536 00:08:19.585 } 00:08:19.585 ] 00:08:19.585 }' 00:08:19.585 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.585 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.843 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.843 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:19.843 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.843 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.843 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.843 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:19.843 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:19.843 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:19.843 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.102 [2024-12-07 16:34:18.741304] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.102 "name": "Existed_Raid", 00:08:20.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.102 "strip_size_kb": 64, 00:08:20.102 "state": "configuring", 00:08:20.102 "raid_level": "raid0", 00:08:20.102 "superblock": false, 00:08:20.102 "num_base_bdevs": 3, 00:08:20.102 "num_base_bdevs_discovered": 1, 00:08:20.102 "num_base_bdevs_operational": 3, 00:08:20.102 "base_bdevs_list": [ 00:08:20.102 { 00:08:20.102 "name": null, 00:08:20.102 "uuid": "deeb57f9-f975-4239-ad30-69628af91b85", 00:08:20.102 "is_configured": false, 00:08:20.102 "data_offset": 0, 00:08:20.102 "data_size": 65536 00:08:20.102 }, 00:08:20.102 { 00:08:20.102 "name": null, 00:08:20.102 "uuid": "4f10b7bd-e7cf-4f8e-8bcf-a9934c11d53a", 00:08:20.102 "is_configured": false, 00:08:20.102 "data_offset": 0, 00:08:20.102 "data_size": 65536 00:08:20.102 }, 00:08:20.102 { 00:08:20.102 "name": "BaseBdev3", 00:08:20.102 "uuid": "1ebd1227-18dc-483a-b3f7-40e0da3617aa", 00:08:20.102 "is_configured": true, 00:08:20.102 "data_offset": 0, 00:08:20.102 "data_size": 65536 00:08:20.102 } 00:08:20.102 ] 00:08:20.102 }' 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.102 16:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.360 [2024-12-07 16:34:19.220462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.360 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.618 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.618 "name": "Existed_Raid", 00:08:20.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.618 "strip_size_kb": 64, 00:08:20.618 "state": "configuring", 00:08:20.618 "raid_level": "raid0", 00:08:20.618 "superblock": false, 00:08:20.618 "num_base_bdevs": 3, 00:08:20.618 "num_base_bdevs_discovered": 2, 00:08:20.618 "num_base_bdevs_operational": 3, 00:08:20.618 "base_bdevs_list": [ 00:08:20.618 { 00:08:20.618 "name": null, 00:08:20.618 "uuid": "deeb57f9-f975-4239-ad30-69628af91b85", 00:08:20.618 "is_configured": false, 00:08:20.618 "data_offset": 0, 00:08:20.618 "data_size": 65536 00:08:20.618 }, 00:08:20.618 { 00:08:20.618 "name": "BaseBdev2", 00:08:20.618 "uuid": "4f10b7bd-e7cf-4f8e-8bcf-a9934c11d53a", 00:08:20.618 "is_configured": true, 00:08:20.618 "data_offset": 0, 00:08:20.618 "data_size": 65536 00:08:20.618 }, 00:08:20.618 { 00:08:20.618 "name": "BaseBdev3", 00:08:20.618 "uuid": "1ebd1227-18dc-483a-b3f7-40e0da3617aa", 00:08:20.618 "is_configured": true, 00:08:20.618 "data_offset": 0, 00:08:20.618 "data_size": 65536 00:08:20.618 } 00:08:20.618 ] 00:08:20.618 }' 00:08:20.618 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.618 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.878 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.878 16:34:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.878 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.878 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:20.878 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.878 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:20.878 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.878 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.878 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:20.878 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.878 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.878 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u deeb57f9-f975-4239-ad30-69628af91b85 00:08:20.878 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.878 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.878 [2024-12-07 16:34:19.772859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:20.878 [2024-12-07 16:34:19.773012] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:20.878 [2024-12-07 16:34:19.773041] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:20.878 [2024-12-07 16:34:19.773369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 
00:08:20.878 [2024-12-07 16:34:19.773553] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:20.878 [2024-12-07 16:34:19.773591] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:20.878 [2024-12-07 16:34:19.773838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.878 NewBaseBdev 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:21.138 [ 00:08:21.138 { 00:08:21.138 "name": "NewBaseBdev", 00:08:21.138 "aliases": [ 00:08:21.138 "deeb57f9-f975-4239-ad30-69628af91b85" 00:08:21.138 ], 00:08:21.138 "product_name": "Malloc disk", 00:08:21.138 "block_size": 512, 00:08:21.138 "num_blocks": 65536, 00:08:21.138 "uuid": "deeb57f9-f975-4239-ad30-69628af91b85", 00:08:21.138 "assigned_rate_limits": { 00:08:21.138 "rw_ios_per_sec": 0, 00:08:21.138 "rw_mbytes_per_sec": 0, 00:08:21.138 "r_mbytes_per_sec": 0, 00:08:21.138 "w_mbytes_per_sec": 0 00:08:21.138 }, 00:08:21.138 "claimed": true, 00:08:21.138 "claim_type": "exclusive_write", 00:08:21.138 "zoned": false, 00:08:21.138 "supported_io_types": { 00:08:21.138 "read": true, 00:08:21.138 "write": true, 00:08:21.138 "unmap": true, 00:08:21.138 "flush": true, 00:08:21.138 "reset": true, 00:08:21.138 "nvme_admin": false, 00:08:21.138 "nvme_io": false, 00:08:21.138 "nvme_io_md": false, 00:08:21.138 "write_zeroes": true, 00:08:21.138 "zcopy": true, 00:08:21.138 "get_zone_info": false, 00:08:21.138 "zone_management": false, 00:08:21.138 "zone_append": false, 00:08:21.138 "compare": false, 00:08:21.138 "compare_and_write": false, 00:08:21.138 "abort": true, 00:08:21.138 "seek_hole": false, 00:08:21.138 "seek_data": false, 00:08:21.138 "copy": true, 00:08:21.138 "nvme_iov_md": false 00:08:21.138 }, 00:08:21.138 "memory_domains": [ 00:08:21.138 { 00:08:21.138 "dma_device_id": "system", 00:08:21.138 "dma_device_type": 1 00:08:21.138 }, 00:08:21.138 { 00:08:21.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.138 "dma_device_type": 2 00:08:21.138 } 00:08:21.138 ], 00:08:21.138 "driver_specific": {} 00:08:21.138 } 00:08:21.138 ] 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.138 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.138 "name": "Existed_Raid", 00:08:21.138 "uuid": "2b12f342-756b-48cd-95d2-82aba4d77206", 00:08:21.138 "strip_size_kb": 64, 00:08:21.138 "state": "online", 00:08:21.138 "raid_level": "raid0", 00:08:21.138 "superblock": false, 00:08:21.138 "num_base_bdevs": 3, 00:08:21.138 
"num_base_bdevs_discovered": 3, 00:08:21.138 "num_base_bdevs_operational": 3, 00:08:21.138 "base_bdevs_list": [ 00:08:21.138 { 00:08:21.138 "name": "NewBaseBdev", 00:08:21.138 "uuid": "deeb57f9-f975-4239-ad30-69628af91b85", 00:08:21.138 "is_configured": true, 00:08:21.138 "data_offset": 0, 00:08:21.138 "data_size": 65536 00:08:21.138 }, 00:08:21.138 { 00:08:21.138 "name": "BaseBdev2", 00:08:21.138 "uuid": "4f10b7bd-e7cf-4f8e-8bcf-a9934c11d53a", 00:08:21.138 "is_configured": true, 00:08:21.138 "data_offset": 0, 00:08:21.138 "data_size": 65536 00:08:21.138 }, 00:08:21.138 { 00:08:21.138 "name": "BaseBdev3", 00:08:21.138 "uuid": "1ebd1227-18dc-483a-b3f7-40e0da3617aa", 00:08:21.138 "is_configured": true, 00:08:21.138 "data_offset": 0, 00:08:21.138 "data_size": 65536 00:08:21.138 } 00:08:21.138 ] 00:08:21.138 }' 00:08:21.139 16:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.139 16:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.399 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:21.399 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:21.399 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.399 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:21.399 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.399 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.399 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:21.399 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.399 16:34:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.399 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.399 [2024-12-07 16:34:20.256370] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.399 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.399 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.399 "name": "Existed_Raid", 00:08:21.399 "aliases": [ 00:08:21.399 "2b12f342-756b-48cd-95d2-82aba4d77206" 00:08:21.399 ], 00:08:21.399 "product_name": "Raid Volume", 00:08:21.399 "block_size": 512, 00:08:21.399 "num_blocks": 196608, 00:08:21.399 "uuid": "2b12f342-756b-48cd-95d2-82aba4d77206", 00:08:21.399 "assigned_rate_limits": { 00:08:21.399 "rw_ios_per_sec": 0, 00:08:21.399 "rw_mbytes_per_sec": 0, 00:08:21.399 "r_mbytes_per_sec": 0, 00:08:21.399 "w_mbytes_per_sec": 0 00:08:21.399 }, 00:08:21.399 "claimed": false, 00:08:21.399 "zoned": false, 00:08:21.399 "supported_io_types": { 00:08:21.399 "read": true, 00:08:21.399 "write": true, 00:08:21.399 "unmap": true, 00:08:21.399 "flush": true, 00:08:21.399 "reset": true, 00:08:21.399 "nvme_admin": false, 00:08:21.399 "nvme_io": false, 00:08:21.399 "nvme_io_md": false, 00:08:21.399 "write_zeroes": true, 00:08:21.399 "zcopy": false, 00:08:21.399 "get_zone_info": false, 00:08:21.399 "zone_management": false, 00:08:21.399 "zone_append": false, 00:08:21.399 "compare": false, 00:08:21.399 "compare_and_write": false, 00:08:21.399 "abort": false, 00:08:21.399 "seek_hole": false, 00:08:21.399 "seek_data": false, 00:08:21.399 "copy": false, 00:08:21.399 "nvme_iov_md": false 00:08:21.399 }, 00:08:21.399 "memory_domains": [ 00:08:21.399 { 00:08:21.399 "dma_device_id": "system", 00:08:21.399 "dma_device_type": 1 00:08:21.399 }, 00:08:21.399 { 00:08:21.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.399 "dma_device_type": 2 00:08:21.399 }, 
00:08:21.399 { 00:08:21.399 "dma_device_id": "system", 00:08:21.399 "dma_device_type": 1 00:08:21.399 }, 00:08:21.399 { 00:08:21.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.399 "dma_device_type": 2 00:08:21.399 }, 00:08:21.399 { 00:08:21.399 "dma_device_id": "system", 00:08:21.399 "dma_device_type": 1 00:08:21.399 }, 00:08:21.399 { 00:08:21.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.400 "dma_device_type": 2 00:08:21.400 } 00:08:21.400 ], 00:08:21.400 "driver_specific": { 00:08:21.400 "raid": { 00:08:21.400 "uuid": "2b12f342-756b-48cd-95d2-82aba4d77206", 00:08:21.400 "strip_size_kb": 64, 00:08:21.400 "state": "online", 00:08:21.400 "raid_level": "raid0", 00:08:21.400 "superblock": false, 00:08:21.400 "num_base_bdevs": 3, 00:08:21.400 "num_base_bdevs_discovered": 3, 00:08:21.400 "num_base_bdevs_operational": 3, 00:08:21.400 "base_bdevs_list": [ 00:08:21.400 { 00:08:21.400 "name": "NewBaseBdev", 00:08:21.400 "uuid": "deeb57f9-f975-4239-ad30-69628af91b85", 00:08:21.400 "is_configured": true, 00:08:21.400 "data_offset": 0, 00:08:21.400 "data_size": 65536 00:08:21.400 }, 00:08:21.400 { 00:08:21.400 "name": "BaseBdev2", 00:08:21.400 "uuid": "4f10b7bd-e7cf-4f8e-8bcf-a9934c11d53a", 00:08:21.400 "is_configured": true, 00:08:21.400 "data_offset": 0, 00:08:21.400 "data_size": 65536 00:08:21.400 }, 00:08:21.400 { 00:08:21.400 "name": "BaseBdev3", 00:08:21.400 "uuid": "1ebd1227-18dc-483a-b3f7-40e0da3617aa", 00:08:21.400 "is_configured": true, 00:08:21.400 "data_offset": 0, 00:08:21.400 "data_size": 65536 00:08:21.400 } 00:08:21.400 ] 00:08:21.400 } 00:08:21.400 } 00:08:21.400 }' 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:21.660 BaseBdev2 00:08:21.660 BaseBdev3' 00:08:21.660 16:34:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.660 [2024-12-07 16:34:20.495612] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:21.660 [2024-12-07 16:34:20.495645] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.660 [2024-12-07 16:34:20.495729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.660 [2024-12-07 16:34:20.495790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.660 [2024-12-07 16:34:20.495803] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75296 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75296 ']' 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75296 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75296 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75296' 00:08:21.660 killing process with pid 75296 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75296 00:08:21.660 [2024-12-07 16:34:20.545664] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.660 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75296 00:08:21.920 [2024-12-07 16:34:20.601625] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.179 16:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:22.179 00:08:22.179 real 0m8.977s 00:08:22.179 user 0m14.983s 00:08:22.179 sys 0m1.932s 00:08:22.179 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:22.179 16:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.179 ************************************ 00:08:22.179 END TEST raid_state_function_test 00:08:22.179 ************************************ 00:08:22.179 16:34:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:22.179 16:34:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:22.179 16:34:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.179 16:34:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.179 ************************************ 00:08:22.179 START TEST raid_state_function_test_sb 00:08:22.179 ************************************ 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75900 00:08:22.179 Process raid pid: 75900 
00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75900' 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75900 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75900 ']' 00:08:22.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.179 16:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.439 [2024-12-07 16:34:21.148808] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:22.439 [2024-12-07 16:34:21.148943] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.439 [2024-12-07 16:34:21.314145] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.699 [2024-12-07 16:34:21.387806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.699 [2024-12-07 16:34:21.464358] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.699 [2024-12-07 16:34:21.464414] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.268 [2024-12-07 16:34:21.956408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.268 [2024-12-07 16:34:21.956469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.268 [2024-12-07 16:34:21.956484] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.268 [2024-12-07 16:34:21.956494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.268 [2024-12-07 16:34:21.956500] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:23.268 [2024-12-07 16:34:21.956512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.268 16:34:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.268 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.268 "name": "Existed_Raid", 00:08:23.268 "uuid": "e2cfea51-22db-4132-b42b-48a1821d1041", 00:08:23.268 "strip_size_kb": 64, 00:08:23.268 "state": "configuring", 00:08:23.268 "raid_level": "raid0", 00:08:23.268 "superblock": true, 00:08:23.268 "num_base_bdevs": 3, 00:08:23.268 "num_base_bdevs_discovered": 0, 00:08:23.268 "num_base_bdevs_operational": 3, 00:08:23.268 "base_bdevs_list": [ 00:08:23.269 { 00:08:23.269 "name": "BaseBdev1", 00:08:23.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.269 "is_configured": false, 00:08:23.269 "data_offset": 0, 00:08:23.269 "data_size": 0 00:08:23.269 }, 00:08:23.269 { 00:08:23.269 "name": "BaseBdev2", 00:08:23.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.269 "is_configured": false, 00:08:23.269 "data_offset": 0, 00:08:23.269 "data_size": 0 00:08:23.269 }, 00:08:23.269 { 00:08:23.269 "name": "BaseBdev3", 00:08:23.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.269 "is_configured": false, 00:08:23.269 "data_offset": 0, 00:08:23.269 "data_size": 0 00:08:23.269 } 00:08:23.269 ] 00:08:23.269 }' 00:08:23.269 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.269 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.529 [2024-12-07 16:34:22.343634] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:23.529 [2024-12-07 16:34:22.343683] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.529 [2024-12-07 16:34:22.355644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.529 [2024-12-07 16:34:22.355727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.529 [2024-12-07 16:34:22.355755] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.529 [2024-12-07 16:34:22.355778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.529 [2024-12-07 16:34:22.355795] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:23.529 [2024-12-07 16:34:22.355815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.529 [2024-12-07 16:34:22.382440] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.529 BaseBdev1 
00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.529 [ 00:08:23.529 { 00:08:23.529 "name": "BaseBdev1", 00:08:23.529 "aliases": [ 00:08:23.529 "840a4e8b-9797-407d-9e36-3412966177d5" 00:08:23.529 ], 00:08:23.529 "product_name": "Malloc disk", 00:08:23.529 "block_size": 512, 00:08:23.529 "num_blocks": 65536, 00:08:23.529 "uuid": "840a4e8b-9797-407d-9e36-3412966177d5", 00:08:23.529 "assigned_rate_limits": { 00:08:23.529 
"rw_ios_per_sec": 0, 00:08:23.529 "rw_mbytes_per_sec": 0, 00:08:23.529 "r_mbytes_per_sec": 0, 00:08:23.529 "w_mbytes_per_sec": 0 00:08:23.529 }, 00:08:23.529 "claimed": true, 00:08:23.529 "claim_type": "exclusive_write", 00:08:23.529 "zoned": false, 00:08:23.529 "supported_io_types": { 00:08:23.529 "read": true, 00:08:23.529 "write": true, 00:08:23.529 "unmap": true, 00:08:23.529 "flush": true, 00:08:23.529 "reset": true, 00:08:23.529 "nvme_admin": false, 00:08:23.529 "nvme_io": false, 00:08:23.529 "nvme_io_md": false, 00:08:23.529 "write_zeroes": true, 00:08:23.529 "zcopy": true, 00:08:23.529 "get_zone_info": false, 00:08:23.529 "zone_management": false, 00:08:23.529 "zone_append": false, 00:08:23.529 "compare": false, 00:08:23.529 "compare_and_write": false, 00:08:23.529 "abort": true, 00:08:23.529 "seek_hole": false, 00:08:23.529 "seek_data": false, 00:08:23.529 "copy": true, 00:08:23.529 "nvme_iov_md": false 00:08:23.529 }, 00:08:23.529 "memory_domains": [ 00:08:23.529 { 00:08:23.529 "dma_device_id": "system", 00:08:23.529 "dma_device_type": 1 00:08:23.529 }, 00:08:23.529 { 00:08:23.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.529 "dma_device_type": 2 00:08:23.529 } 00:08:23.529 ], 00:08:23.529 "driver_specific": {} 00:08:23.529 } 00:08:23.529 ] 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.529 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.530 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.790 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.790 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.790 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.790 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.790 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.790 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.790 "name": "Existed_Raid", 00:08:23.790 "uuid": "2f39ec30-a80a-4dee-b2bc-db099239b470", 00:08:23.790 "strip_size_kb": 64, 00:08:23.790 "state": "configuring", 00:08:23.790 "raid_level": "raid0", 00:08:23.790 "superblock": true, 00:08:23.790 "num_base_bdevs": 3, 00:08:23.790 "num_base_bdevs_discovered": 1, 00:08:23.790 "num_base_bdevs_operational": 3, 00:08:23.790 "base_bdevs_list": [ 00:08:23.790 { 00:08:23.790 "name": "BaseBdev1", 00:08:23.790 "uuid": "840a4e8b-9797-407d-9e36-3412966177d5", 00:08:23.790 "is_configured": true, 00:08:23.791 "data_offset": 2048, 00:08:23.791 "data_size": 63488 
00:08:23.791 }, 00:08:23.791 { 00:08:23.791 "name": "BaseBdev2", 00:08:23.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.791 "is_configured": false, 00:08:23.791 "data_offset": 0, 00:08:23.791 "data_size": 0 00:08:23.791 }, 00:08:23.791 { 00:08:23.791 "name": "BaseBdev3", 00:08:23.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.791 "is_configured": false, 00:08:23.791 "data_offset": 0, 00:08:23.791 "data_size": 0 00:08:23.791 } 00:08:23.791 ] 00:08:23.791 }' 00:08:23.791 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.791 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.051 [2024-12-07 16:34:22.841662] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:24.051 [2024-12-07 16:34:22.841818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.051 [2024-12-07 16:34:22.853757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.051 [2024-12-07 
16:34:22.855923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.051 [2024-12-07 16:34:22.856000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.051 [2024-12-07 16:34:22.856029] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:24.051 [2024-12-07 16:34:22.856052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.051 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.051 "name": "Existed_Raid", 00:08:24.051 "uuid": "4f2aa59c-3eb1-4b01-b335-efa4806280af", 00:08:24.051 "strip_size_kb": 64, 00:08:24.051 "state": "configuring", 00:08:24.051 "raid_level": "raid0", 00:08:24.051 "superblock": true, 00:08:24.051 "num_base_bdevs": 3, 00:08:24.051 "num_base_bdevs_discovered": 1, 00:08:24.051 "num_base_bdevs_operational": 3, 00:08:24.051 "base_bdevs_list": [ 00:08:24.051 { 00:08:24.051 "name": "BaseBdev1", 00:08:24.052 "uuid": "840a4e8b-9797-407d-9e36-3412966177d5", 00:08:24.052 "is_configured": true, 00:08:24.052 "data_offset": 2048, 00:08:24.052 "data_size": 63488 00:08:24.052 }, 00:08:24.052 { 00:08:24.052 "name": "BaseBdev2", 00:08:24.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.052 "is_configured": false, 00:08:24.052 "data_offset": 0, 00:08:24.052 "data_size": 0 00:08:24.052 }, 00:08:24.052 { 00:08:24.052 "name": "BaseBdev3", 00:08:24.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.052 "is_configured": false, 00:08:24.052 "data_offset": 0, 00:08:24.052 "data_size": 0 00:08:24.052 } 00:08:24.052 ] 00:08:24.052 }' 00:08:24.052 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.052 16:34:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.621 [2024-12-07 16:34:23.323328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.621 BaseBdev2 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.621 [ 00:08:24.621 { 00:08:24.621 "name": "BaseBdev2", 00:08:24.621 "aliases": [ 00:08:24.621 "94a747aa-ce56-4431-b6fd-cefebba4dbcb" 00:08:24.621 ], 00:08:24.621 "product_name": "Malloc disk", 00:08:24.621 "block_size": 512, 00:08:24.621 "num_blocks": 65536, 00:08:24.621 "uuid": "94a747aa-ce56-4431-b6fd-cefebba4dbcb", 00:08:24.621 "assigned_rate_limits": { 00:08:24.621 "rw_ios_per_sec": 0, 00:08:24.621 "rw_mbytes_per_sec": 0, 00:08:24.621 "r_mbytes_per_sec": 0, 00:08:24.621 "w_mbytes_per_sec": 0 00:08:24.621 }, 00:08:24.621 "claimed": true, 00:08:24.621 "claim_type": "exclusive_write", 00:08:24.621 "zoned": false, 00:08:24.621 "supported_io_types": { 00:08:24.621 "read": true, 00:08:24.621 "write": true, 00:08:24.621 "unmap": true, 00:08:24.621 "flush": true, 00:08:24.621 "reset": true, 00:08:24.621 "nvme_admin": false, 00:08:24.621 "nvme_io": false, 00:08:24.621 "nvme_io_md": false, 00:08:24.621 "write_zeroes": true, 00:08:24.621 "zcopy": true, 00:08:24.621 "get_zone_info": false, 00:08:24.621 "zone_management": false, 00:08:24.621 "zone_append": false, 00:08:24.621 "compare": false, 00:08:24.621 "compare_and_write": false, 00:08:24.621 "abort": true, 00:08:24.621 "seek_hole": false, 00:08:24.621 "seek_data": false, 00:08:24.621 "copy": true, 00:08:24.621 "nvme_iov_md": false 00:08:24.621 }, 00:08:24.621 "memory_domains": [ 00:08:24.621 { 00:08:24.621 "dma_device_id": "system", 00:08:24.621 "dma_device_type": 1 00:08:24.621 }, 00:08:24.621 { 00:08:24.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.621 "dma_device_type": 2 00:08:24.621 } 00:08:24.621 ], 00:08:24.621 "driver_specific": {} 00:08:24.621 } 00:08:24.621 ] 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.621 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.621 "name": "Existed_Raid", 00:08:24.621 "uuid": "4f2aa59c-3eb1-4b01-b335-efa4806280af", 00:08:24.621 "strip_size_kb": 64, 00:08:24.622 "state": "configuring", 00:08:24.622 "raid_level": "raid0", 00:08:24.622 "superblock": true, 00:08:24.622 "num_base_bdevs": 3, 00:08:24.622 "num_base_bdevs_discovered": 2, 00:08:24.622 "num_base_bdevs_operational": 3, 00:08:24.622 "base_bdevs_list": [ 00:08:24.622 { 00:08:24.622 "name": "BaseBdev1", 00:08:24.622 "uuid": "840a4e8b-9797-407d-9e36-3412966177d5", 00:08:24.622 "is_configured": true, 00:08:24.622 "data_offset": 2048, 00:08:24.622 "data_size": 63488 00:08:24.622 }, 00:08:24.622 { 00:08:24.622 "name": "BaseBdev2", 00:08:24.622 "uuid": "94a747aa-ce56-4431-b6fd-cefebba4dbcb", 00:08:24.622 "is_configured": true, 00:08:24.622 "data_offset": 2048, 00:08:24.622 "data_size": 63488 00:08:24.622 }, 00:08:24.622 { 00:08:24.622 "name": "BaseBdev3", 00:08:24.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.622 "is_configured": false, 00:08:24.622 "data_offset": 0, 00:08:24.622 "data_size": 0 00:08:24.622 } 00:08:24.622 ] 00:08:24.622 }' 00:08:24.622 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.622 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.192 [2024-12-07 16:34:23.803472] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.192 [2024-12-07 16:34:23.803706] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:25.192 [2024-12-07 16:34:23.803729] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:25.192 BaseBdev3 00:08:25.192 [2024-12-07 16:34:23.804050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:25.192 [2024-12-07 16:34:23.804199] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:25.192 [2024-12-07 16:34:23.804215] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:25.192 [2024-12-07 16:34:23.804345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.192 [ 00:08:25.192 { 00:08:25.192 "name": "BaseBdev3", 00:08:25.192 "aliases": [ 00:08:25.192 "ce7963ec-5145-4bf8-8440-5bb1ca9cdf47" 00:08:25.192 ], 00:08:25.192 "product_name": "Malloc disk", 00:08:25.192 "block_size": 512, 00:08:25.192 "num_blocks": 65536, 00:08:25.192 "uuid": "ce7963ec-5145-4bf8-8440-5bb1ca9cdf47", 00:08:25.192 "assigned_rate_limits": { 00:08:25.192 "rw_ios_per_sec": 0, 00:08:25.192 "rw_mbytes_per_sec": 0, 00:08:25.192 "r_mbytes_per_sec": 0, 00:08:25.192 "w_mbytes_per_sec": 0 00:08:25.192 }, 00:08:25.192 "claimed": true, 00:08:25.192 "claim_type": "exclusive_write", 00:08:25.192 "zoned": false, 00:08:25.192 "supported_io_types": { 00:08:25.192 "read": true, 00:08:25.192 "write": true, 00:08:25.192 "unmap": true, 00:08:25.192 "flush": true, 00:08:25.192 "reset": true, 00:08:25.192 "nvme_admin": false, 00:08:25.192 "nvme_io": false, 00:08:25.192 "nvme_io_md": false, 00:08:25.192 "write_zeroes": true, 00:08:25.192 "zcopy": true, 00:08:25.192 "get_zone_info": false, 00:08:25.192 "zone_management": false, 00:08:25.192 "zone_append": false, 00:08:25.192 "compare": false, 00:08:25.192 "compare_and_write": false, 00:08:25.192 "abort": true, 00:08:25.192 "seek_hole": false, 00:08:25.192 "seek_data": false, 00:08:25.192 "copy": true, 00:08:25.192 "nvme_iov_md": false 00:08:25.192 }, 00:08:25.192 "memory_domains": [ 00:08:25.192 { 00:08:25.192 "dma_device_id": "system", 00:08:25.192 "dma_device_type": 1 00:08:25.192 }, 00:08:25.192 { 00:08:25.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.192 "dma_device_type": 2 00:08:25.192 } 00:08:25.192 ], 00:08:25.192 "driver_specific": 
{} 00:08:25.192 } 00:08:25.192 ] 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.192 "name": "Existed_Raid", 00:08:25.192 "uuid": "4f2aa59c-3eb1-4b01-b335-efa4806280af", 00:08:25.192 "strip_size_kb": 64, 00:08:25.192 "state": "online", 00:08:25.192 "raid_level": "raid0", 00:08:25.192 "superblock": true, 00:08:25.192 "num_base_bdevs": 3, 00:08:25.192 "num_base_bdevs_discovered": 3, 00:08:25.192 "num_base_bdevs_operational": 3, 00:08:25.192 "base_bdevs_list": [ 00:08:25.192 { 00:08:25.192 "name": "BaseBdev1", 00:08:25.192 "uuid": "840a4e8b-9797-407d-9e36-3412966177d5", 00:08:25.192 "is_configured": true, 00:08:25.192 "data_offset": 2048, 00:08:25.192 "data_size": 63488 00:08:25.192 }, 00:08:25.192 { 00:08:25.192 "name": "BaseBdev2", 00:08:25.193 "uuid": "94a747aa-ce56-4431-b6fd-cefebba4dbcb", 00:08:25.193 "is_configured": true, 00:08:25.193 "data_offset": 2048, 00:08:25.193 "data_size": 63488 00:08:25.193 }, 00:08:25.193 { 00:08:25.193 "name": "BaseBdev3", 00:08:25.193 "uuid": "ce7963ec-5145-4bf8-8440-5bb1ca9cdf47", 00:08:25.193 "is_configured": true, 00:08:25.193 "data_offset": 2048, 00:08:25.193 "data_size": 63488 00:08:25.193 } 00:08:25.193 ] 00:08:25.193 }' 00:08:25.193 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.193 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.453 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:25.453 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:25.453 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:25.453 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:25.453 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:25.453 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:25.453 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:25.453 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:25.453 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.453 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.453 [2024-12-07 16:34:24.299148] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.453 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.453 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:25.453 "name": "Existed_Raid", 00:08:25.453 "aliases": [ 00:08:25.453 "4f2aa59c-3eb1-4b01-b335-efa4806280af" 00:08:25.453 ], 00:08:25.453 "product_name": "Raid Volume", 00:08:25.453 "block_size": 512, 00:08:25.453 "num_blocks": 190464, 00:08:25.453 "uuid": "4f2aa59c-3eb1-4b01-b335-efa4806280af", 00:08:25.453 "assigned_rate_limits": { 00:08:25.453 "rw_ios_per_sec": 0, 00:08:25.453 "rw_mbytes_per_sec": 0, 00:08:25.453 "r_mbytes_per_sec": 0, 00:08:25.453 "w_mbytes_per_sec": 0 00:08:25.453 }, 00:08:25.453 "claimed": false, 00:08:25.453 "zoned": false, 00:08:25.453 "supported_io_types": { 00:08:25.453 "read": true, 00:08:25.453 "write": true, 00:08:25.453 "unmap": true, 00:08:25.453 "flush": true, 00:08:25.453 "reset": true, 00:08:25.453 "nvme_admin": false, 00:08:25.453 "nvme_io": false, 00:08:25.453 "nvme_io_md": false, 00:08:25.453 
"write_zeroes": true, 00:08:25.453 "zcopy": false, 00:08:25.453 "get_zone_info": false, 00:08:25.453 "zone_management": false, 00:08:25.453 "zone_append": false, 00:08:25.453 "compare": false, 00:08:25.453 "compare_and_write": false, 00:08:25.453 "abort": false, 00:08:25.453 "seek_hole": false, 00:08:25.453 "seek_data": false, 00:08:25.453 "copy": false, 00:08:25.453 "nvme_iov_md": false 00:08:25.453 }, 00:08:25.453 "memory_domains": [ 00:08:25.453 { 00:08:25.453 "dma_device_id": "system", 00:08:25.453 "dma_device_type": 1 00:08:25.453 }, 00:08:25.453 { 00:08:25.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.453 "dma_device_type": 2 00:08:25.453 }, 00:08:25.453 { 00:08:25.453 "dma_device_id": "system", 00:08:25.453 "dma_device_type": 1 00:08:25.453 }, 00:08:25.453 { 00:08:25.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.453 "dma_device_type": 2 00:08:25.453 }, 00:08:25.453 { 00:08:25.453 "dma_device_id": "system", 00:08:25.453 "dma_device_type": 1 00:08:25.453 }, 00:08:25.453 { 00:08:25.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.453 "dma_device_type": 2 00:08:25.453 } 00:08:25.453 ], 00:08:25.453 "driver_specific": { 00:08:25.453 "raid": { 00:08:25.453 "uuid": "4f2aa59c-3eb1-4b01-b335-efa4806280af", 00:08:25.453 "strip_size_kb": 64, 00:08:25.453 "state": "online", 00:08:25.453 "raid_level": "raid0", 00:08:25.453 "superblock": true, 00:08:25.453 "num_base_bdevs": 3, 00:08:25.453 "num_base_bdevs_discovered": 3, 00:08:25.453 "num_base_bdevs_operational": 3, 00:08:25.453 "base_bdevs_list": [ 00:08:25.453 { 00:08:25.453 "name": "BaseBdev1", 00:08:25.453 "uuid": "840a4e8b-9797-407d-9e36-3412966177d5", 00:08:25.453 "is_configured": true, 00:08:25.453 "data_offset": 2048, 00:08:25.453 "data_size": 63488 00:08:25.453 }, 00:08:25.453 { 00:08:25.453 "name": "BaseBdev2", 00:08:25.453 "uuid": "94a747aa-ce56-4431-b6fd-cefebba4dbcb", 00:08:25.453 "is_configured": true, 00:08:25.453 "data_offset": 2048, 00:08:25.453 "data_size": 63488 00:08:25.453 }, 
00:08:25.453 { 00:08:25.453 "name": "BaseBdev3", 00:08:25.453 "uuid": "ce7963ec-5145-4bf8-8440-5bb1ca9cdf47", 00:08:25.453 "is_configured": true, 00:08:25.453 "data_offset": 2048, 00:08:25.453 "data_size": 63488 00:08:25.453 } 00:08:25.453 ] 00:08:25.453 } 00:08:25.453 } 00:08:25.453 }' 00:08:25.453 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:25.714 BaseBdev2 00:08:25.714 BaseBdev3' 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.714 
16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.714 [2024-12-07 16:34:24.570428] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:25.714 [2024-12-07 16:34:24.570510] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.714 [2024-12-07 16:34:24.570582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.714 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.975 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.975 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.975 "name": "Existed_Raid", 00:08:25.975 "uuid": "4f2aa59c-3eb1-4b01-b335-efa4806280af", 00:08:25.975 "strip_size_kb": 64, 00:08:25.975 "state": "offline", 00:08:25.975 "raid_level": "raid0", 00:08:25.975 "superblock": true, 00:08:25.975 "num_base_bdevs": 3, 00:08:25.975 "num_base_bdevs_discovered": 2, 00:08:25.975 "num_base_bdevs_operational": 2, 00:08:25.975 "base_bdevs_list": [ 00:08:25.975 { 00:08:25.975 "name": null, 00:08:25.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.975 "is_configured": false, 00:08:25.975 "data_offset": 0, 00:08:25.975 "data_size": 63488 00:08:25.975 }, 00:08:25.975 { 00:08:25.975 "name": "BaseBdev2", 00:08:25.975 "uuid": "94a747aa-ce56-4431-b6fd-cefebba4dbcb", 00:08:25.975 "is_configured": true, 00:08:25.975 "data_offset": 2048, 00:08:25.975 "data_size": 63488 00:08:25.975 }, 00:08:25.975 { 00:08:25.975 "name": "BaseBdev3", 00:08:25.975 "uuid": "ce7963ec-5145-4bf8-8440-5bb1ca9cdf47", 
00:08:25.975 "is_configured": true, 00:08:25.975 "data_offset": 2048, 00:08:25.975 "data_size": 63488 00:08:25.975 } 00:08:25.975 ] 00:08:25.975 }' 00:08:25.975 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.975 16:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.235 [2024-12-07 16:34:25.078183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:26.235 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.495 [2024-12-07 16:34:25.157676] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:26.495 [2024-12-07 16:34:25.157738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.495 BaseBdev2 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:26.495 16:34:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.495 [ 00:08:26.495 { 00:08:26.495 "name": "BaseBdev2", 00:08:26.495 "aliases": [ 00:08:26.495 "e27f114d-0de5-4802-b935-4ee97778f238" 00:08:26.495 ], 00:08:26.495 "product_name": "Malloc disk", 00:08:26.495 "block_size": 512, 00:08:26.495 "num_blocks": 65536, 00:08:26.495 "uuid": "e27f114d-0de5-4802-b935-4ee97778f238", 00:08:26.495 "assigned_rate_limits": { 00:08:26.495 "rw_ios_per_sec": 0, 00:08:26.495 "rw_mbytes_per_sec": 0, 00:08:26.495 "r_mbytes_per_sec": 0, 00:08:26.495 "w_mbytes_per_sec": 0 00:08:26.495 }, 00:08:26.495 "claimed": false, 00:08:26.495 "zoned": false, 00:08:26.495 "supported_io_types": { 00:08:26.495 "read": true, 00:08:26.495 "write": true, 00:08:26.495 "unmap": true, 00:08:26.495 "flush": true, 00:08:26.495 "reset": true, 00:08:26.495 "nvme_admin": false, 00:08:26.495 "nvme_io": false, 00:08:26.495 "nvme_io_md": false, 00:08:26.495 "write_zeroes": true, 00:08:26.495 "zcopy": true, 00:08:26.495 "get_zone_info": false, 00:08:26.495 
"zone_management": false, 00:08:26.495 "zone_append": false, 00:08:26.495 "compare": false, 00:08:26.495 "compare_and_write": false, 00:08:26.495 "abort": true, 00:08:26.495 "seek_hole": false, 00:08:26.495 "seek_data": false, 00:08:26.495 "copy": true, 00:08:26.495 "nvme_iov_md": false 00:08:26.495 }, 00:08:26.495 "memory_domains": [ 00:08:26.495 { 00:08:26.495 "dma_device_id": "system", 00:08:26.495 "dma_device_type": 1 00:08:26.495 }, 00:08:26.495 { 00:08:26.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.495 "dma_device_type": 2 00:08:26.495 } 00:08:26.495 ], 00:08:26.495 "driver_specific": {} 00:08:26.495 } 00:08:26.495 ] 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.495 BaseBdev3 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:26.495 16:34:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.496 [ 00:08:26.496 { 00:08:26.496 "name": "BaseBdev3", 00:08:26.496 "aliases": [ 00:08:26.496 "063b62ca-6721-40b0-be99-1744210bc073" 00:08:26.496 ], 00:08:26.496 "product_name": "Malloc disk", 00:08:26.496 "block_size": 512, 00:08:26.496 "num_blocks": 65536, 00:08:26.496 "uuid": "063b62ca-6721-40b0-be99-1744210bc073", 00:08:26.496 "assigned_rate_limits": { 00:08:26.496 "rw_ios_per_sec": 0, 00:08:26.496 "rw_mbytes_per_sec": 0, 00:08:26.496 "r_mbytes_per_sec": 0, 00:08:26.496 "w_mbytes_per_sec": 0 00:08:26.496 }, 00:08:26.496 "claimed": false, 00:08:26.496 "zoned": false, 00:08:26.496 "supported_io_types": { 00:08:26.496 "read": true, 00:08:26.496 "write": true, 00:08:26.496 "unmap": true, 00:08:26.496 "flush": true, 00:08:26.496 "reset": true, 00:08:26.496 "nvme_admin": false, 00:08:26.496 "nvme_io": false, 00:08:26.496 "nvme_io_md": false, 00:08:26.496 "write_zeroes": true, 00:08:26.496 
"zcopy": true, 00:08:26.496 "get_zone_info": false, 00:08:26.496 "zone_management": false, 00:08:26.496 "zone_append": false, 00:08:26.496 "compare": false, 00:08:26.496 "compare_and_write": false, 00:08:26.496 "abort": true, 00:08:26.496 "seek_hole": false, 00:08:26.496 "seek_data": false, 00:08:26.496 "copy": true, 00:08:26.496 "nvme_iov_md": false 00:08:26.496 }, 00:08:26.496 "memory_domains": [ 00:08:26.496 { 00:08:26.496 "dma_device_id": "system", 00:08:26.496 "dma_device_type": 1 00:08:26.496 }, 00:08:26.496 { 00:08:26.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.496 "dma_device_type": 2 00:08:26.496 } 00:08:26.496 ], 00:08:26.496 "driver_specific": {} 00:08:26.496 } 00:08:26.496 ] 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.496 [2024-12-07 16:34:25.346721] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:26.496 [2024-12-07 16:34:25.346773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:26.496 [2024-12-07 16:34:25.346796] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.496 [2024-12-07 16:34:25.348898] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.496 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.755 16:34:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.755 "name": "Existed_Raid", 00:08:26.755 "uuid": "6f0ff5f3-c179-47dd-bf3c-afc62a63ad1f", 00:08:26.755 "strip_size_kb": 64, 00:08:26.755 "state": "configuring", 00:08:26.755 "raid_level": "raid0", 00:08:26.755 "superblock": true, 00:08:26.755 "num_base_bdevs": 3, 00:08:26.755 "num_base_bdevs_discovered": 2, 00:08:26.755 "num_base_bdevs_operational": 3, 00:08:26.755 "base_bdevs_list": [ 00:08:26.755 { 00:08:26.755 "name": "BaseBdev1", 00:08:26.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.755 "is_configured": false, 00:08:26.755 "data_offset": 0, 00:08:26.755 "data_size": 0 00:08:26.755 }, 00:08:26.755 { 00:08:26.755 "name": "BaseBdev2", 00:08:26.755 "uuid": "e27f114d-0de5-4802-b935-4ee97778f238", 00:08:26.755 "is_configured": true, 00:08:26.755 "data_offset": 2048, 00:08:26.755 "data_size": 63488 00:08:26.755 }, 00:08:26.755 { 00:08:26.755 "name": "BaseBdev3", 00:08:26.755 "uuid": "063b62ca-6721-40b0-be99-1744210bc073", 00:08:26.755 "is_configured": true, 00:08:26.755 "data_offset": 2048, 00:08:26.755 "data_size": 63488 00:08:26.755 } 00:08:26.755 ] 00:08:26.755 }' 00:08:26.755 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.755 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.015 [2024-12-07 16:34:25.805894] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.015 16:34:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.015 "name": "Existed_Raid", 00:08:27.015 "uuid": "6f0ff5f3-c179-47dd-bf3c-afc62a63ad1f", 00:08:27.015 "strip_size_kb": 64, 
00:08:27.015 "state": "configuring", 00:08:27.015 "raid_level": "raid0", 00:08:27.015 "superblock": true, 00:08:27.015 "num_base_bdevs": 3, 00:08:27.015 "num_base_bdevs_discovered": 1, 00:08:27.015 "num_base_bdevs_operational": 3, 00:08:27.015 "base_bdevs_list": [ 00:08:27.015 { 00:08:27.015 "name": "BaseBdev1", 00:08:27.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.015 "is_configured": false, 00:08:27.015 "data_offset": 0, 00:08:27.015 "data_size": 0 00:08:27.015 }, 00:08:27.015 { 00:08:27.015 "name": null, 00:08:27.015 "uuid": "e27f114d-0de5-4802-b935-4ee97778f238", 00:08:27.015 "is_configured": false, 00:08:27.015 "data_offset": 0, 00:08:27.015 "data_size": 63488 00:08:27.015 }, 00:08:27.015 { 00:08:27.015 "name": "BaseBdev3", 00:08:27.015 "uuid": "063b62ca-6721-40b0-be99-1744210bc073", 00:08:27.015 "is_configured": true, 00:08:27.015 "data_offset": 2048, 00:08:27.015 "data_size": 63488 00:08:27.015 } 00:08:27.015 ] 00:08:27.015 }' 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.015 16:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.585 [2024-12-07 16:34:26.266232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.585 BaseBdev1 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.585 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.585 
[ 00:08:27.585 { 00:08:27.585 "name": "BaseBdev1", 00:08:27.585 "aliases": [ 00:08:27.586 "e254e53d-5bbc-4f5a-a360-33b0d728d399" 00:08:27.586 ], 00:08:27.586 "product_name": "Malloc disk", 00:08:27.586 "block_size": 512, 00:08:27.586 "num_blocks": 65536, 00:08:27.586 "uuid": "e254e53d-5bbc-4f5a-a360-33b0d728d399", 00:08:27.586 "assigned_rate_limits": { 00:08:27.586 "rw_ios_per_sec": 0, 00:08:27.586 "rw_mbytes_per_sec": 0, 00:08:27.586 "r_mbytes_per_sec": 0, 00:08:27.586 "w_mbytes_per_sec": 0 00:08:27.586 }, 00:08:27.586 "claimed": true, 00:08:27.586 "claim_type": "exclusive_write", 00:08:27.586 "zoned": false, 00:08:27.586 "supported_io_types": { 00:08:27.586 "read": true, 00:08:27.586 "write": true, 00:08:27.586 "unmap": true, 00:08:27.586 "flush": true, 00:08:27.586 "reset": true, 00:08:27.586 "nvme_admin": false, 00:08:27.586 "nvme_io": false, 00:08:27.586 "nvme_io_md": false, 00:08:27.586 "write_zeroes": true, 00:08:27.586 "zcopy": true, 00:08:27.586 "get_zone_info": false, 00:08:27.586 "zone_management": false, 00:08:27.586 "zone_append": false, 00:08:27.586 "compare": false, 00:08:27.586 "compare_and_write": false, 00:08:27.586 "abort": true, 00:08:27.586 "seek_hole": false, 00:08:27.586 "seek_data": false, 00:08:27.586 "copy": true, 00:08:27.586 "nvme_iov_md": false 00:08:27.586 }, 00:08:27.586 "memory_domains": [ 00:08:27.586 { 00:08:27.586 "dma_device_id": "system", 00:08:27.586 "dma_device_type": 1 00:08:27.586 }, 00:08:27.586 { 00:08:27.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.586 "dma_device_type": 2 00:08:27.586 } 00:08:27.586 ], 00:08:27.586 "driver_specific": {} 00:08:27.586 } 00:08:27.586 ] 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.586 "name": "Existed_Raid", 00:08:27.586 "uuid": "6f0ff5f3-c179-47dd-bf3c-afc62a63ad1f", 00:08:27.586 "strip_size_kb": 64, 00:08:27.586 "state": "configuring", 00:08:27.586 "raid_level": "raid0", 00:08:27.586 "superblock": true, 
00:08:27.586 "num_base_bdevs": 3, 00:08:27.586 "num_base_bdevs_discovered": 2, 00:08:27.586 "num_base_bdevs_operational": 3, 00:08:27.586 "base_bdevs_list": [ 00:08:27.586 { 00:08:27.586 "name": "BaseBdev1", 00:08:27.586 "uuid": "e254e53d-5bbc-4f5a-a360-33b0d728d399", 00:08:27.586 "is_configured": true, 00:08:27.586 "data_offset": 2048, 00:08:27.586 "data_size": 63488 00:08:27.586 }, 00:08:27.586 { 00:08:27.586 "name": null, 00:08:27.586 "uuid": "e27f114d-0de5-4802-b935-4ee97778f238", 00:08:27.586 "is_configured": false, 00:08:27.586 "data_offset": 0, 00:08:27.586 "data_size": 63488 00:08:27.586 }, 00:08:27.586 { 00:08:27.586 "name": "BaseBdev3", 00:08:27.586 "uuid": "063b62ca-6721-40b0-be99-1744210bc073", 00:08:27.586 "is_configured": true, 00:08:27.586 "data_offset": 2048, 00:08:27.586 "data_size": 63488 00:08:27.586 } 00:08:27.586 ] 00:08:27.586 }' 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.586 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.156 [2024-12-07 16:34:26.797409] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.156 "name": "Existed_Raid", 00:08:28.156 "uuid": "6f0ff5f3-c179-47dd-bf3c-afc62a63ad1f", 00:08:28.156 "strip_size_kb": 64, 00:08:28.156 "state": "configuring", 00:08:28.156 "raid_level": "raid0", 00:08:28.156 "superblock": true, 00:08:28.156 "num_base_bdevs": 3, 00:08:28.156 "num_base_bdevs_discovered": 1, 00:08:28.156 "num_base_bdevs_operational": 3, 00:08:28.156 "base_bdevs_list": [ 00:08:28.156 { 00:08:28.156 "name": "BaseBdev1", 00:08:28.156 "uuid": "e254e53d-5bbc-4f5a-a360-33b0d728d399", 00:08:28.156 "is_configured": true, 00:08:28.156 "data_offset": 2048, 00:08:28.156 "data_size": 63488 00:08:28.156 }, 00:08:28.156 { 00:08:28.156 "name": null, 00:08:28.156 "uuid": "e27f114d-0de5-4802-b935-4ee97778f238", 00:08:28.156 "is_configured": false, 00:08:28.156 "data_offset": 0, 00:08:28.156 "data_size": 63488 00:08:28.156 }, 00:08:28.156 { 00:08:28.156 "name": null, 00:08:28.156 "uuid": "063b62ca-6721-40b0-be99-1744210bc073", 00:08:28.156 "is_configured": false, 00:08:28.156 "data_offset": 0, 00:08:28.156 "data_size": 63488 00:08:28.156 } 00:08:28.156 ] 00:08:28.156 }' 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.156 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.416 [2024-12-07 16:34:27.300565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.416 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.676 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.676 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.676 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.676 "name": "Existed_Raid", 00:08:28.676 "uuid": "6f0ff5f3-c179-47dd-bf3c-afc62a63ad1f", 00:08:28.676 "strip_size_kb": 64, 00:08:28.676 "state": "configuring", 00:08:28.676 "raid_level": "raid0", 00:08:28.676 "superblock": true, 00:08:28.676 "num_base_bdevs": 3, 00:08:28.676 "num_base_bdevs_discovered": 2, 00:08:28.676 "num_base_bdevs_operational": 3, 00:08:28.676 "base_bdevs_list": [ 00:08:28.676 { 00:08:28.676 "name": "BaseBdev1", 00:08:28.676 "uuid": "e254e53d-5bbc-4f5a-a360-33b0d728d399", 00:08:28.676 "is_configured": true, 00:08:28.676 "data_offset": 2048, 00:08:28.676 "data_size": 63488 00:08:28.676 }, 00:08:28.676 { 00:08:28.676 "name": null, 00:08:28.676 "uuid": "e27f114d-0de5-4802-b935-4ee97778f238", 00:08:28.676 "is_configured": false, 00:08:28.676 "data_offset": 0, 00:08:28.676 "data_size": 63488 00:08:28.676 }, 00:08:28.676 { 00:08:28.676 "name": "BaseBdev3", 00:08:28.676 "uuid": "063b62ca-6721-40b0-be99-1744210bc073", 00:08:28.676 "is_configured": true, 00:08:28.676 "data_offset": 2048, 00:08:28.676 "data_size": 63488 00:08:28.676 } 00:08:28.676 ] 00:08:28.676 }' 00:08:28.676 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.676 16:34:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.937 [2024-12-07 16:34:27.807702] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.937 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.197 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.197 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.197 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.197 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.197 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.197 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.197 "name": "Existed_Raid", 00:08:29.197 "uuid": "6f0ff5f3-c179-47dd-bf3c-afc62a63ad1f", 00:08:29.197 "strip_size_kb": 64, 00:08:29.197 "state": "configuring", 00:08:29.197 "raid_level": "raid0", 00:08:29.197 "superblock": true, 00:08:29.197 "num_base_bdevs": 3, 00:08:29.197 "num_base_bdevs_discovered": 1, 00:08:29.197 "num_base_bdevs_operational": 3, 00:08:29.197 "base_bdevs_list": [ 00:08:29.197 { 00:08:29.197 "name": null, 00:08:29.197 "uuid": "e254e53d-5bbc-4f5a-a360-33b0d728d399", 00:08:29.197 "is_configured": false, 00:08:29.197 "data_offset": 0, 00:08:29.197 "data_size": 63488 00:08:29.197 }, 00:08:29.197 { 00:08:29.197 "name": null, 00:08:29.197 "uuid": "e27f114d-0de5-4802-b935-4ee97778f238", 00:08:29.197 "is_configured": false, 00:08:29.197 "data_offset": 0, 00:08:29.197 
"data_size": 63488 00:08:29.197 }, 00:08:29.197 { 00:08:29.197 "name": "BaseBdev3", 00:08:29.197 "uuid": "063b62ca-6721-40b0-be99-1744210bc073", 00:08:29.197 "is_configured": true, 00:08:29.197 "data_offset": 2048, 00:08:29.197 "data_size": 63488 00:08:29.197 } 00:08:29.197 ] 00:08:29.197 }' 00:08:29.197 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.197 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.457 [2024-12-07 16:34:28.315074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.457 16:34:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.457 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.716 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.716 "name": "Existed_Raid", 00:08:29.716 "uuid": "6f0ff5f3-c179-47dd-bf3c-afc62a63ad1f", 00:08:29.716 "strip_size_kb": 64, 00:08:29.716 "state": "configuring", 00:08:29.716 "raid_level": "raid0", 00:08:29.716 "superblock": true, 00:08:29.716 "num_base_bdevs": 3, 00:08:29.716 
"num_base_bdevs_discovered": 2, 00:08:29.716 "num_base_bdevs_operational": 3, 00:08:29.717 "base_bdevs_list": [ 00:08:29.717 { 00:08:29.717 "name": null, 00:08:29.717 "uuid": "e254e53d-5bbc-4f5a-a360-33b0d728d399", 00:08:29.717 "is_configured": false, 00:08:29.717 "data_offset": 0, 00:08:29.717 "data_size": 63488 00:08:29.717 }, 00:08:29.717 { 00:08:29.717 "name": "BaseBdev2", 00:08:29.717 "uuid": "e27f114d-0de5-4802-b935-4ee97778f238", 00:08:29.717 "is_configured": true, 00:08:29.717 "data_offset": 2048, 00:08:29.717 "data_size": 63488 00:08:29.717 }, 00:08:29.717 { 00:08:29.717 "name": "BaseBdev3", 00:08:29.717 "uuid": "063b62ca-6721-40b0-be99-1744210bc073", 00:08:29.717 "is_configured": true, 00:08:29.717 "data_offset": 2048, 00:08:29.717 "data_size": 63488 00:08:29.717 } 00:08:29.717 ] 00:08:29.717 }' 00:08:29.717 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.717 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.977 16:34:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e254e53d-5bbc-4f5a-a360-33b0d728d399 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.977 [2024-12-07 16:34:28.855113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:29.977 [2024-12-07 16:34:28.855303] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:29.977 [2024-12-07 16:34:28.855322] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:29.977 [2024-12-07 16:34:28.855619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:29.977 [2024-12-07 16:34:28.855749] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:29.977 [2024-12-07 16:34:28.855759] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:29.977 NewBaseBdev 00:08:29.977 [2024-12-07 16:34:28.855869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 
00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.977 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.239 [ 00:08:30.239 { 00:08:30.239 "name": "NewBaseBdev", 00:08:30.239 "aliases": [ 00:08:30.239 "e254e53d-5bbc-4f5a-a360-33b0d728d399" 00:08:30.239 ], 00:08:30.239 "product_name": "Malloc disk", 00:08:30.239 "block_size": 512, 00:08:30.239 "num_blocks": 65536, 00:08:30.239 "uuid": "e254e53d-5bbc-4f5a-a360-33b0d728d399", 00:08:30.239 "assigned_rate_limits": { 00:08:30.239 "rw_ios_per_sec": 0, 00:08:30.239 "rw_mbytes_per_sec": 0, 00:08:30.239 "r_mbytes_per_sec": 0, 00:08:30.239 "w_mbytes_per_sec": 0 00:08:30.239 }, 00:08:30.239 "claimed": true, 00:08:30.239 "claim_type": "exclusive_write", 00:08:30.239 "zoned": false, 00:08:30.239 "supported_io_types": { 00:08:30.239 "read": true, 00:08:30.239 "write": true, 
00:08:30.239 "unmap": true, 00:08:30.239 "flush": true, 00:08:30.239 "reset": true, 00:08:30.239 "nvme_admin": false, 00:08:30.239 "nvme_io": false, 00:08:30.239 "nvme_io_md": false, 00:08:30.239 "write_zeroes": true, 00:08:30.239 "zcopy": true, 00:08:30.239 "get_zone_info": false, 00:08:30.239 "zone_management": false, 00:08:30.239 "zone_append": false, 00:08:30.239 "compare": false, 00:08:30.239 "compare_and_write": false, 00:08:30.240 "abort": true, 00:08:30.240 "seek_hole": false, 00:08:30.240 "seek_data": false, 00:08:30.240 "copy": true, 00:08:30.240 "nvme_iov_md": false 00:08:30.240 }, 00:08:30.240 "memory_domains": [ 00:08:30.240 { 00:08:30.240 "dma_device_id": "system", 00:08:30.240 "dma_device_type": 1 00:08:30.240 }, 00:08:30.240 { 00:08:30.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.240 "dma_device_type": 2 00:08:30.240 } 00:08:30.240 ], 00:08:30.240 "driver_specific": {} 00:08:30.240 } 00:08:30.240 ] 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.240 "name": "Existed_Raid", 00:08:30.240 "uuid": "6f0ff5f3-c179-47dd-bf3c-afc62a63ad1f", 00:08:30.240 "strip_size_kb": 64, 00:08:30.240 "state": "online", 00:08:30.240 "raid_level": "raid0", 00:08:30.240 "superblock": true, 00:08:30.240 "num_base_bdevs": 3, 00:08:30.240 "num_base_bdevs_discovered": 3, 00:08:30.240 "num_base_bdevs_operational": 3, 00:08:30.240 "base_bdevs_list": [ 00:08:30.240 { 00:08:30.240 "name": "NewBaseBdev", 00:08:30.240 "uuid": "e254e53d-5bbc-4f5a-a360-33b0d728d399", 00:08:30.240 "is_configured": true, 00:08:30.240 "data_offset": 2048, 00:08:30.240 "data_size": 63488 00:08:30.240 }, 00:08:30.240 { 00:08:30.240 "name": "BaseBdev2", 00:08:30.240 "uuid": "e27f114d-0de5-4802-b935-4ee97778f238", 00:08:30.240 "is_configured": true, 00:08:30.240 "data_offset": 2048, 00:08:30.240 "data_size": 63488 00:08:30.240 }, 00:08:30.240 { 00:08:30.240 "name": "BaseBdev3", 00:08:30.240 "uuid": 
"063b62ca-6721-40b0-be99-1744210bc073", 00:08:30.240 "is_configured": true, 00:08:30.240 "data_offset": 2048, 00:08:30.240 "data_size": 63488 00:08:30.240 } 00:08:30.240 ] 00:08:30.240 }' 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.240 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.515 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:30.515 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:30.515 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:30.515 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:30.515 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:30.515 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:30.516 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:30.516 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.516 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.516 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:30.516 [2024-12-07 16:34:29.314738] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.516 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.516 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:30.516 "name": "Existed_Raid", 00:08:30.516 "aliases": [ 00:08:30.516 "6f0ff5f3-c179-47dd-bf3c-afc62a63ad1f" 
00:08:30.516 ], 00:08:30.516 "product_name": "Raid Volume", 00:08:30.516 "block_size": 512, 00:08:30.516 "num_blocks": 190464, 00:08:30.516 "uuid": "6f0ff5f3-c179-47dd-bf3c-afc62a63ad1f", 00:08:30.516 "assigned_rate_limits": { 00:08:30.516 "rw_ios_per_sec": 0, 00:08:30.516 "rw_mbytes_per_sec": 0, 00:08:30.516 "r_mbytes_per_sec": 0, 00:08:30.516 "w_mbytes_per_sec": 0 00:08:30.516 }, 00:08:30.516 "claimed": false, 00:08:30.516 "zoned": false, 00:08:30.516 "supported_io_types": { 00:08:30.516 "read": true, 00:08:30.516 "write": true, 00:08:30.516 "unmap": true, 00:08:30.516 "flush": true, 00:08:30.516 "reset": true, 00:08:30.516 "nvme_admin": false, 00:08:30.516 "nvme_io": false, 00:08:30.516 "nvme_io_md": false, 00:08:30.516 "write_zeroes": true, 00:08:30.516 "zcopy": false, 00:08:30.516 "get_zone_info": false, 00:08:30.516 "zone_management": false, 00:08:30.516 "zone_append": false, 00:08:30.516 "compare": false, 00:08:30.516 "compare_and_write": false, 00:08:30.516 "abort": false, 00:08:30.516 "seek_hole": false, 00:08:30.516 "seek_data": false, 00:08:30.516 "copy": false, 00:08:30.516 "nvme_iov_md": false 00:08:30.516 }, 00:08:30.516 "memory_domains": [ 00:08:30.516 { 00:08:30.516 "dma_device_id": "system", 00:08:30.516 "dma_device_type": 1 00:08:30.516 }, 00:08:30.516 { 00:08:30.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.516 "dma_device_type": 2 00:08:30.516 }, 00:08:30.516 { 00:08:30.516 "dma_device_id": "system", 00:08:30.516 "dma_device_type": 1 00:08:30.516 }, 00:08:30.516 { 00:08:30.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.516 "dma_device_type": 2 00:08:30.516 }, 00:08:30.516 { 00:08:30.516 "dma_device_id": "system", 00:08:30.516 "dma_device_type": 1 00:08:30.516 }, 00:08:30.516 { 00:08:30.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.516 "dma_device_type": 2 00:08:30.516 } 00:08:30.516 ], 00:08:30.516 "driver_specific": { 00:08:30.516 "raid": { 00:08:30.516 "uuid": "6f0ff5f3-c179-47dd-bf3c-afc62a63ad1f", 00:08:30.516 
"strip_size_kb": 64, 00:08:30.516 "state": "online", 00:08:30.516 "raid_level": "raid0", 00:08:30.516 "superblock": true, 00:08:30.516 "num_base_bdevs": 3, 00:08:30.516 "num_base_bdevs_discovered": 3, 00:08:30.516 "num_base_bdevs_operational": 3, 00:08:30.516 "base_bdevs_list": [ 00:08:30.516 { 00:08:30.516 "name": "NewBaseBdev", 00:08:30.516 "uuid": "e254e53d-5bbc-4f5a-a360-33b0d728d399", 00:08:30.516 "is_configured": true, 00:08:30.516 "data_offset": 2048, 00:08:30.516 "data_size": 63488 00:08:30.516 }, 00:08:30.516 { 00:08:30.516 "name": "BaseBdev2", 00:08:30.516 "uuid": "e27f114d-0de5-4802-b935-4ee97778f238", 00:08:30.516 "is_configured": true, 00:08:30.516 "data_offset": 2048, 00:08:30.516 "data_size": 63488 00:08:30.516 }, 00:08:30.516 { 00:08:30.516 "name": "BaseBdev3", 00:08:30.516 "uuid": "063b62ca-6721-40b0-be99-1744210bc073", 00:08:30.516 "is_configured": true, 00:08:30.516 "data_offset": 2048, 00:08:30.516 "data_size": 63488 00:08:30.516 } 00:08:30.516 ] 00:08:30.516 } 00:08:30.516 } 00:08:30.516 }' 00:08:30.516 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:30.794 BaseBdev2 00:08:30.794 BaseBdev3' 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.794 [2024-12-07 16:34:29.593911] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.794 [2024-12-07 16:34:29.593941] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.794 [2024-12-07 16:34:29.594024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.794 [2024-12-07 16:34:29.594086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.794 [2024-12-07 16:34:29.594100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75900 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75900 ']' 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 75900 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75900 00:08:30.794 killing process with pid 75900 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75900' 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75900 00:08:30.794 [2024-12-07 16:34:29.637603] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.794 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75900 00:08:31.054 [2024-12-07 16:34:29.696242] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.315 16:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:31.315 00:08:31.315 real 0m9.022s 00:08:31.315 user 0m15.093s 00:08:31.315 sys 0m1.880s 00:08:31.315 16:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.315 ************************************ 00:08:31.315 END TEST raid_state_function_test_sb 00:08:31.315 ************************************ 00:08:31.315 16:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.315 16:34:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:31.315 16:34:30 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:31.315 16:34:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.315 16:34:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.315 ************************************ 00:08:31.315 START TEST raid_superblock_test 00:08:31.315 ************************************ 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:31.315 16:34:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76504 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76504 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76504 ']' 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.315 16:34:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.576 [2024-12-07 16:34:30.232226] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:31.576 [2024-12-07 16:34:30.232384] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76504 ] 00:08:31.576 [2024-12-07 16:34:30.395938] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.576 [2024-12-07 16:34:30.464693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.836 [2024-12-07 16:34:30.543349] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.836 [2024-12-07 16:34:30.543408] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:32.405 
16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.405 malloc1 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.405 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.405 [2024-12-07 16:34:31.094584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:32.405 [2024-12-07 16:34:31.094751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.405 [2024-12-07 16:34:31.094797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:32.405 [2024-12-07 16:34:31.094841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.405 [2024-12-07 16:34:31.097302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.405 [2024-12-07 16:34:31.097392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:32.405 pt1 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.406 malloc2 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.406 [2024-12-07 16:34:31.151860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:32.406 [2024-12-07 16:34:31.151976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.406 [2024-12-07 16:34:31.152015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:32.406 [2024-12-07 16:34:31.152043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.406 [2024-12-07 16:34:31.156514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.406 pt2 00:08:32.406 [2024-12-07 16:34:31.156644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.406 malloc3 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.406 [2024-12-07 16:34:31.187863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:32.406 [2024-12-07 16:34:31.187972] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.406 [2024-12-07 16:34:31.188020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:32.406 [2024-12-07 16:34:31.188055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.406 [2024-12-07 16:34:31.190358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.406 [2024-12-07 16:34:31.190422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:32.406 pt3 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.406 [2024-12-07 16:34:31.199899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:32.406 [2024-12-07 16:34:31.201996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:32.406 [2024-12-07 16:34:31.202093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:32.406 [2024-12-07 16:34:31.202260] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:32.406 [2024-12-07 16:34:31.202299] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:32.406 [2024-12-07 16:34:31.202611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:08:32.406 [2024-12-07 16:34:31.202787] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:32.406 [2024-12-07 16:34:31.202832] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:32.406 [2024-12-07 16:34:31.203013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.406 16:34:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.406 "name": "raid_bdev1", 00:08:32.406 "uuid": "67de80d8-c8d4-4c58-9b8b-dadb1de3b109", 00:08:32.406 "strip_size_kb": 64, 00:08:32.406 "state": "online", 00:08:32.406 "raid_level": "raid0", 00:08:32.406 "superblock": true, 00:08:32.406 "num_base_bdevs": 3, 00:08:32.406 "num_base_bdevs_discovered": 3, 00:08:32.406 "num_base_bdevs_operational": 3, 00:08:32.406 "base_bdevs_list": [ 00:08:32.406 { 00:08:32.406 "name": "pt1", 00:08:32.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.406 "is_configured": true, 00:08:32.406 "data_offset": 2048, 00:08:32.406 "data_size": 63488 00:08:32.406 }, 00:08:32.406 { 00:08:32.406 "name": "pt2", 00:08:32.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.406 "is_configured": true, 00:08:32.406 "data_offset": 2048, 00:08:32.406 "data_size": 63488 00:08:32.406 }, 00:08:32.406 { 00:08:32.406 "name": "pt3", 00:08:32.406 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:32.406 "is_configured": true, 00:08:32.406 "data_offset": 2048, 00:08:32.406 "data_size": 63488 00:08:32.406 } 00:08:32.406 ] 00:08:32.406 }' 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.406 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.976 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:32.976 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:32.976 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:32.976 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:32.976 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:32.976 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:32.976 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:32.976 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:32.976 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.976 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.976 [2024-12-07 16:34:31.595539] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.976 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.976 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:32.976 "name": "raid_bdev1", 00:08:32.976 "aliases": [ 00:08:32.976 "67de80d8-c8d4-4c58-9b8b-dadb1de3b109" 00:08:32.976 ], 00:08:32.976 "product_name": "Raid Volume", 00:08:32.976 "block_size": 512, 00:08:32.976 "num_blocks": 190464, 00:08:32.976 "uuid": "67de80d8-c8d4-4c58-9b8b-dadb1de3b109", 00:08:32.976 "assigned_rate_limits": { 00:08:32.976 "rw_ios_per_sec": 0, 00:08:32.976 "rw_mbytes_per_sec": 0, 00:08:32.976 "r_mbytes_per_sec": 0, 00:08:32.976 "w_mbytes_per_sec": 0 00:08:32.976 }, 00:08:32.976 "claimed": false, 00:08:32.976 "zoned": false, 00:08:32.976 "supported_io_types": { 00:08:32.976 "read": true, 00:08:32.976 "write": true, 00:08:32.976 "unmap": true, 00:08:32.976 "flush": true, 00:08:32.976 "reset": true, 00:08:32.976 "nvme_admin": false, 00:08:32.976 "nvme_io": false, 00:08:32.976 "nvme_io_md": false, 00:08:32.976 "write_zeroes": true, 00:08:32.976 "zcopy": false, 00:08:32.976 "get_zone_info": false, 00:08:32.976 "zone_management": false, 00:08:32.976 "zone_append": false, 00:08:32.976 "compare": 
false, 00:08:32.976 "compare_and_write": false, 00:08:32.976 "abort": false, 00:08:32.976 "seek_hole": false, 00:08:32.976 "seek_data": false, 00:08:32.976 "copy": false, 00:08:32.976 "nvme_iov_md": false 00:08:32.976 }, 00:08:32.976 "memory_domains": [ 00:08:32.976 { 00:08:32.976 "dma_device_id": "system", 00:08:32.976 "dma_device_type": 1 00:08:32.976 }, 00:08:32.976 { 00:08:32.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.976 "dma_device_type": 2 00:08:32.976 }, 00:08:32.976 { 00:08:32.976 "dma_device_id": "system", 00:08:32.976 "dma_device_type": 1 00:08:32.976 }, 00:08:32.976 { 00:08:32.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.976 "dma_device_type": 2 00:08:32.976 }, 00:08:32.976 { 00:08:32.976 "dma_device_id": "system", 00:08:32.976 "dma_device_type": 1 00:08:32.976 }, 00:08:32.976 { 00:08:32.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.976 "dma_device_type": 2 00:08:32.976 } 00:08:32.976 ], 00:08:32.976 "driver_specific": { 00:08:32.976 "raid": { 00:08:32.976 "uuid": "67de80d8-c8d4-4c58-9b8b-dadb1de3b109", 00:08:32.976 "strip_size_kb": 64, 00:08:32.976 "state": "online", 00:08:32.976 "raid_level": "raid0", 00:08:32.976 "superblock": true, 00:08:32.976 "num_base_bdevs": 3, 00:08:32.976 "num_base_bdevs_discovered": 3, 00:08:32.976 "num_base_bdevs_operational": 3, 00:08:32.976 "base_bdevs_list": [ 00:08:32.976 { 00:08:32.976 "name": "pt1", 00:08:32.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.976 "is_configured": true, 00:08:32.976 "data_offset": 2048, 00:08:32.976 "data_size": 63488 00:08:32.976 }, 00:08:32.977 { 00:08:32.977 "name": "pt2", 00:08:32.977 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.977 "is_configured": true, 00:08:32.977 "data_offset": 2048, 00:08:32.977 "data_size": 63488 00:08:32.977 }, 00:08:32.977 { 00:08:32.977 "name": "pt3", 00:08:32.977 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:32.977 "is_configured": true, 00:08:32.977 "data_offset": 2048, 00:08:32.977 "data_size": 
63488 00:08:32.977 } 00:08:32.977 ] 00:08:32.977 } 00:08:32.977 } 00:08:32.977 }' 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:32.977 pt2 00:08:32.977 pt3' 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:32.977 [2024-12-07 16:34:31.851080] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.977 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=67de80d8-c8d4-4c58-9b8b-dadb1de3b109 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 67de80d8-c8d4-4c58-9b8b-dadb1de3b109 ']' 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.237 [2024-12-07 16:34:31.886713] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.237 [2024-12-07 16:34:31.886782] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.237 [2024-12-07 16:34:31.886873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.237 [2024-12-07 16:34:31.886966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.237 [2024-12-07 16:34:31.886981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:33.237 16:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.237 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:33.237 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:33.237 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:33.237 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:33.237 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:33.237 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.237 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:33.237 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.237 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:33.237 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.237 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.237 [2024-12-07 16:34:32.034499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:33.237 [2024-12-07 16:34:32.036686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:33.237 [2024-12-07 16:34:32.036729] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:33.237 [2024-12-07 16:34:32.036781] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:33.237 [2024-12-07 16:34:32.036828] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:33.237 [2024-12-07 16:34:32.036849] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:33.237 [2024-12-07 16:34:32.036860] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.237 [2024-12-07 16:34:32.036871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:33.237 request: 00:08:33.237 { 00:08:33.237 "name": "raid_bdev1", 00:08:33.237 "raid_level": "raid0", 00:08:33.237 "base_bdevs": [ 00:08:33.237 "malloc1", 00:08:33.237 "malloc2", 00:08:33.238 "malloc3" 00:08:33.238 ], 00:08:33.238 "strip_size_kb": 64, 00:08:33.238 "superblock": false, 00:08:33.238 "method": "bdev_raid_create", 00:08:33.238 "req_id": 1 00:08:33.238 } 00:08:33.238 Got JSON-RPC error response 00:08:33.238 response: 00:08:33.238 { 00:08:33.238 "code": -17, 00:08:33.238 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:33.238 } 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.238 [2024-12-07 16:34:32.098454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:33.238 [2024-12-07 16:34:32.098539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.238 [2024-12-07 16:34:32.098571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:33.238 [2024-12-07 16:34:32.098601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.238 [2024-12-07 16:34:32.101020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.238 [2024-12-07 16:34:32.101085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:33.238 [2024-12-07 16:34:32.101169] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:33.238 [2024-12-07 16:34:32.101231] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:33.238 pt1 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.238 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.498 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.498 "name": "raid_bdev1", 00:08:33.498 "uuid": "67de80d8-c8d4-4c58-9b8b-dadb1de3b109", 00:08:33.498 
"strip_size_kb": 64, 00:08:33.498 "state": "configuring", 00:08:33.498 "raid_level": "raid0", 00:08:33.498 "superblock": true, 00:08:33.498 "num_base_bdevs": 3, 00:08:33.498 "num_base_bdevs_discovered": 1, 00:08:33.498 "num_base_bdevs_operational": 3, 00:08:33.498 "base_bdevs_list": [ 00:08:33.498 { 00:08:33.498 "name": "pt1", 00:08:33.498 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.498 "is_configured": true, 00:08:33.498 "data_offset": 2048, 00:08:33.498 "data_size": 63488 00:08:33.498 }, 00:08:33.498 { 00:08:33.498 "name": null, 00:08:33.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.498 "is_configured": false, 00:08:33.498 "data_offset": 2048, 00:08:33.498 "data_size": 63488 00:08:33.498 }, 00:08:33.498 { 00:08:33.498 "name": null, 00:08:33.498 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.498 "is_configured": false, 00:08:33.498 "data_offset": 2048, 00:08:33.498 "data_size": 63488 00:08:33.498 } 00:08:33.498 ] 00:08:33.498 }' 00:08:33.498 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.498 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.758 [2024-12-07 16:34:32.505790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:33.758 [2024-12-07 16:34:32.505867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.758 [2024-12-07 16:34:32.505889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:33.758 [2024-12-07 16:34:32.505906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.758 [2024-12-07 16:34:32.506389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.758 [2024-12-07 16:34:32.506412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:33.758 [2024-12-07 16:34:32.506497] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:33.758 [2024-12-07 16:34:32.506526] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:33.758 pt2 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.758 [2024-12-07 16:34:32.513769] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.758 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.758 16:34:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.759 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.759 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.759 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.759 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.759 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.759 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.759 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.759 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.759 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.759 "name": "raid_bdev1", 00:08:33.759 "uuid": "67de80d8-c8d4-4c58-9b8b-dadb1de3b109", 00:08:33.759 "strip_size_kb": 64, 00:08:33.759 "state": "configuring", 00:08:33.759 "raid_level": "raid0", 00:08:33.759 "superblock": true, 00:08:33.759 "num_base_bdevs": 3, 00:08:33.759 "num_base_bdevs_discovered": 1, 00:08:33.759 "num_base_bdevs_operational": 3, 00:08:33.759 "base_bdevs_list": [ 00:08:33.759 { 00:08:33.759 "name": "pt1", 00:08:33.759 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.759 "is_configured": true, 00:08:33.759 "data_offset": 2048, 00:08:33.759 "data_size": 63488 00:08:33.759 }, 00:08:33.759 { 00:08:33.759 "name": null, 00:08:33.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.759 "is_configured": false, 00:08:33.759 "data_offset": 0, 00:08:33.759 "data_size": 63488 00:08:33.759 }, 00:08:33.759 { 00:08:33.759 "name": null, 00:08:33.759 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.759 
"is_configured": false, 00:08:33.759 "data_offset": 2048, 00:08:33.759 "data_size": 63488 00:08:33.759 } 00:08:33.759 ] 00:08:33.759 }' 00:08:33.759 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.759 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.019 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:34.019 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:34.019 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:34.019 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.019 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.019 [2024-12-07 16:34:32.893175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:34.019 [2024-12-07 16:34:32.893364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.019 [2024-12-07 16:34:32.893405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:34.019 [2024-12-07 16:34:32.893439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.020 [2024-12-07 16:34:32.893961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.020 [2024-12-07 16:34:32.894016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:34.020 [2024-12-07 16:34:32.894142] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:34.020 [2024-12-07 16:34:32.894194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:34.020 pt2 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.020 [2024-12-07 16:34:32.905085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:34.020 [2024-12-07 16:34:32.905170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.020 [2024-12-07 16:34:32.905206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:34.020 [2024-12-07 16:34:32.905229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.020 [2024-12-07 16:34:32.905642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.020 [2024-12-07 16:34:32.905695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:34.020 [2024-12-07 16:34:32.905789] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:34.020 [2024-12-07 16:34:32.905835] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:34.020 [2024-12-07 16:34:32.905958] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:34.020 [2024-12-07 16:34:32.905992] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:34.020 [2024-12-07 16:34:32.906255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:34.020 [2024-12-07 16:34:32.906425] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:34.020 [2024-12-07 16:34:32.906465] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:34.020 [2024-12-07 16:34:32.906612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.020 pt3 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.020 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.280 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.280 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:34.280 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.280 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.280 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.280 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.280 "name": "raid_bdev1", 00:08:34.280 "uuid": "67de80d8-c8d4-4c58-9b8b-dadb1de3b109", 00:08:34.280 "strip_size_kb": 64, 00:08:34.280 "state": "online", 00:08:34.280 "raid_level": "raid0", 00:08:34.280 "superblock": true, 00:08:34.280 "num_base_bdevs": 3, 00:08:34.280 "num_base_bdevs_discovered": 3, 00:08:34.280 "num_base_bdevs_operational": 3, 00:08:34.280 "base_bdevs_list": [ 00:08:34.280 { 00:08:34.280 "name": "pt1", 00:08:34.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.280 "is_configured": true, 00:08:34.280 "data_offset": 2048, 00:08:34.280 "data_size": 63488 00:08:34.280 }, 00:08:34.280 { 00:08:34.280 "name": "pt2", 00:08:34.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.280 "is_configured": true, 00:08:34.280 "data_offset": 2048, 00:08:34.280 "data_size": 63488 00:08:34.280 }, 00:08:34.280 { 00:08:34.280 "name": "pt3", 00:08:34.280 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:34.280 "is_configured": true, 00:08:34.280 "data_offset": 2048, 00:08:34.280 "data_size": 63488 00:08:34.280 } 00:08:34.280 ] 00:08:34.280 }' 00:08:34.280 16:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.280 16:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.548 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:34.548 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:34.548 16:34:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:34.548 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:34.548 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:34.548 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:34.548 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:34.548 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:34.548 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.548 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.548 [2024-12-07 16:34:33.328772] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.549 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.549 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:34.549 "name": "raid_bdev1", 00:08:34.549 "aliases": [ 00:08:34.549 "67de80d8-c8d4-4c58-9b8b-dadb1de3b109" 00:08:34.549 ], 00:08:34.549 "product_name": "Raid Volume", 00:08:34.549 "block_size": 512, 00:08:34.549 "num_blocks": 190464, 00:08:34.549 "uuid": "67de80d8-c8d4-4c58-9b8b-dadb1de3b109", 00:08:34.549 "assigned_rate_limits": { 00:08:34.549 "rw_ios_per_sec": 0, 00:08:34.549 "rw_mbytes_per_sec": 0, 00:08:34.549 "r_mbytes_per_sec": 0, 00:08:34.549 "w_mbytes_per_sec": 0 00:08:34.549 }, 00:08:34.549 "claimed": false, 00:08:34.549 "zoned": false, 00:08:34.549 "supported_io_types": { 00:08:34.549 "read": true, 00:08:34.549 "write": true, 00:08:34.549 "unmap": true, 00:08:34.549 "flush": true, 00:08:34.549 "reset": true, 00:08:34.549 "nvme_admin": false, 00:08:34.549 "nvme_io": false, 00:08:34.549 "nvme_io_md": false, 00:08:34.549 
"write_zeroes": true, 00:08:34.549 "zcopy": false, 00:08:34.549 "get_zone_info": false, 00:08:34.549 "zone_management": false, 00:08:34.549 "zone_append": false, 00:08:34.549 "compare": false, 00:08:34.549 "compare_and_write": false, 00:08:34.549 "abort": false, 00:08:34.549 "seek_hole": false, 00:08:34.549 "seek_data": false, 00:08:34.549 "copy": false, 00:08:34.549 "nvme_iov_md": false 00:08:34.549 }, 00:08:34.549 "memory_domains": [ 00:08:34.549 { 00:08:34.549 "dma_device_id": "system", 00:08:34.549 "dma_device_type": 1 00:08:34.549 }, 00:08:34.549 { 00:08:34.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.549 "dma_device_type": 2 00:08:34.549 }, 00:08:34.549 { 00:08:34.549 "dma_device_id": "system", 00:08:34.549 "dma_device_type": 1 00:08:34.549 }, 00:08:34.549 { 00:08:34.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.549 "dma_device_type": 2 00:08:34.549 }, 00:08:34.549 { 00:08:34.549 "dma_device_id": "system", 00:08:34.549 "dma_device_type": 1 00:08:34.549 }, 00:08:34.549 { 00:08:34.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.549 "dma_device_type": 2 00:08:34.549 } 00:08:34.549 ], 00:08:34.549 "driver_specific": { 00:08:34.549 "raid": { 00:08:34.549 "uuid": "67de80d8-c8d4-4c58-9b8b-dadb1de3b109", 00:08:34.549 "strip_size_kb": 64, 00:08:34.549 "state": "online", 00:08:34.549 "raid_level": "raid0", 00:08:34.549 "superblock": true, 00:08:34.549 "num_base_bdevs": 3, 00:08:34.549 "num_base_bdevs_discovered": 3, 00:08:34.549 "num_base_bdevs_operational": 3, 00:08:34.549 "base_bdevs_list": [ 00:08:34.549 { 00:08:34.549 "name": "pt1", 00:08:34.549 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.549 "is_configured": true, 00:08:34.549 "data_offset": 2048, 00:08:34.549 "data_size": 63488 00:08:34.549 }, 00:08:34.549 { 00:08:34.549 "name": "pt2", 00:08:34.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.549 "is_configured": true, 00:08:34.549 "data_offset": 2048, 00:08:34.549 "data_size": 63488 00:08:34.549 }, 00:08:34.549 
{ 00:08:34.549 "name": "pt3", 00:08:34.549 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:34.549 "is_configured": true, 00:08:34.549 "data_offset": 2048, 00:08:34.549 "data_size": 63488 00:08:34.549 } 00:08:34.549 ] 00:08:34.549 } 00:08:34.549 } 00:08:34.549 }' 00:08:34.549 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.549 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:34.549 pt2 00:08:34.549 pt3' 00:08:34.549 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.811 [2024-12-07 
16:34:33.568216] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 67de80d8-c8d4-4c58-9b8b-dadb1de3b109 '!=' 67de80d8-c8d4-4c58-9b8b-dadb1de3b109 ']' 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:34.811 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.812 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:34.812 16:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76504 00:08:34.812 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76504 ']' 00:08:34.812 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76504 00:08:34.812 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:34.812 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.812 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76504 00:08:34.812 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.812 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.812 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76504' 00:08:34.812 killing process with pid 76504 00:08:34.812 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76504 00:08:34.812 [2024-12-07 16:34:33.636292] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.812 [2024-12-07 16:34:33.636452] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.812 16:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76504 00:08:34.812 [2024-12-07 16:34:33.636555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.812 [2024-12-07 16:34:33.636568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:34.812 [2024-12-07 16:34:33.697676] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.381 16:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:35.381 ************************************ 00:08:35.381 END TEST raid_superblock_test 00:08:35.381 ************************************ 00:08:35.381 00:08:35.381 real 0m3.925s 00:08:35.381 user 0m5.898s 00:08:35.381 sys 0m0.926s 00:08:35.381 16:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.381 16:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.381 16:34:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:35.381 16:34:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:35.381 16:34:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.381 16:34:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.381 ************************************ 00:08:35.381 START TEST raid_read_error_test 00:08:35.381 ************************************ 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:35.381 16:34:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7jVJqSpR9f 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76746 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76746 00:08:35.381 16:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76746 ']' 00:08:35.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.382 16:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.382 16:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.382 16:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.382 16:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.382 16:34:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.382 [2024-12-07 16:34:34.241606] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:35.382 [2024-12-07 16:34:34.241798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76746 ] 00:08:35.640 [2024-12-07 16:34:34.391535] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.640 [2024-12-07 16:34:34.463836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.900 [2024-12-07 16:34:34.540782] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.900 [2024-12-07 16:34:34.540822] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.470 BaseBdev1_malloc 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.470 true 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.470 [2024-12-07 16:34:35.124116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:36.470 [2024-12-07 16:34:35.124186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.470 [2024-12-07 16:34:35.124207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:36.470 [2024-12-07 16:34:35.124223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.470 [2024-12-07 16:34:35.126704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.470 [2024-12-07 16:34:35.126781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:36.470 BaseBdev1 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.470 BaseBdev2_malloc 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.470 true 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.470 [2024-12-07 16:34:35.180478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:36.470 [2024-12-07 16:34:35.180534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.470 [2024-12-07 16:34:35.180552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:36.470 [2024-12-07 16:34:35.180561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.470 [2024-12-07 16:34:35.182914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.470 [2024-12-07 16:34:35.183022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:36.470 BaseBdev2 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.470 BaseBdev3_malloc 00:08:36.470 16:34:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.470 true 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.470 [2024-12-07 16:34:35.227657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:36.470 [2024-12-07 16:34:35.227710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.470 [2024-12-07 16:34:35.227731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:36.470 [2024-12-07 16:34:35.227741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.470 [2024-12-07 16:34:35.230099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.470 [2024-12-07 16:34:35.230196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:36.470 BaseBdev3 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.470 [2024-12-07 16:34:35.239716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.470 [2024-12-07 16:34:35.241798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.470 [2024-12-07 16:34:35.241879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.470 [2024-12-07 16:34:35.242053] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:36.470 [2024-12-07 16:34:35.242069] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:36.470 [2024-12-07 16:34:35.242311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:36.470 [2024-12-07 16:34:35.242463] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:36.470 [2024-12-07 16:34:35.242474] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:36.470 [2024-12-07 16:34:35.242591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.470 16:34:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.470 "name": "raid_bdev1", 00:08:36.470 "uuid": "c9832d7c-d0d0-4cb2-9a6f-c9b4c7b6e7d7", 00:08:36.470 "strip_size_kb": 64, 00:08:36.470 "state": "online", 00:08:36.470 "raid_level": "raid0", 00:08:36.470 "superblock": true, 00:08:36.470 "num_base_bdevs": 3, 00:08:36.470 "num_base_bdevs_discovered": 3, 00:08:36.470 "num_base_bdevs_operational": 3, 00:08:36.470 "base_bdevs_list": [ 00:08:36.470 { 00:08:36.470 "name": "BaseBdev1", 00:08:36.470 "uuid": "dd4ea9e3-6078-5a06-a18a-adf7c54f9bc8", 00:08:36.470 "is_configured": true, 00:08:36.470 "data_offset": 2048, 00:08:36.470 "data_size": 63488 00:08:36.470 }, 00:08:36.470 { 00:08:36.470 "name": "BaseBdev2", 00:08:36.470 "uuid": "3e05e952-b265-5336-8684-132c078b0cae", 00:08:36.470 "is_configured": true, 00:08:36.470 "data_offset": 2048, 00:08:36.470 "data_size": 63488 
00:08:36.470 }, 00:08:36.470 { 00:08:36.470 "name": "BaseBdev3", 00:08:36.470 "uuid": "20889a46-7ae9-5e71-bd84-4bd3c1593871", 00:08:36.470 "is_configured": true, 00:08:36.470 "data_offset": 2048, 00:08:36.470 "data_size": 63488 00:08:36.470 } 00:08:36.470 ] 00:08:36.470 }' 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.470 16:34:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.039 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:37.039 16:34:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:37.039 [2024-12-07 16:34:35.771384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.979 "name": "raid_bdev1", 00:08:37.979 "uuid": "c9832d7c-d0d0-4cb2-9a6f-c9b4c7b6e7d7", 00:08:37.979 "strip_size_kb": 64, 00:08:37.979 "state": "online", 00:08:37.979 "raid_level": "raid0", 00:08:37.979 "superblock": true, 00:08:37.979 "num_base_bdevs": 3, 00:08:37.979 "num_base_bdevs_discovered": 3, 00:08:37.979 "num_base_bdevs_operational": 3, 00:08:37.979 "base_bdevs_list": [ 00:08:37.979 { 00:08:37.979 "name": "BaseBdev1", 00:08:37.979 "uuid": "dd4ea9e3-6078-5a06-a18a-adf7c54f9bc8", 00:08:37.979 "is_configured": true, 00:08:37.979 "data_offset": 2048, 00:08:37.979 "data_size": 63488 
00:08:37.979 }, 00:08:37.979 { 00:08:37.979 "name": "BaseBdev2", 00:08:37.979 "uuid": "3e05e952-b265-5336-8684-132c078b0cae", 00:08:37.979 "is_configured": true, 00:08:37.979 "data_offset": 2048, 00:08:37.979 "data_size": 63488 00:08:37.979 }, 00:08:37.979 { 00:08:37.979 "name": "BaseBdev3", 00:08:37.979 "uuid": "20889a46-7ae9-5e71-bd84-4bd3c1593871", 00:08:37.979 "is_configured": true, 00:08:37.979 "data_offset": 2048, 00:08:37.979 "data_size": 63488 00:08:37.979 } 00:08:37.979 ] 00:08:37.979 }' 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.979 16:34:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.548 [2024-12-07 16:34:37.143727] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:38.548 [2024-12-07 16:34:37.143774] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.548 [2024-12-07 16:34:37.146313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.548 [2024-12-07 16:34:37.146413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.548 [2024-12-07 16:34:37.146454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.548 [2024-12-07 16:34:37.146474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:38.548 { 00:08:38.548 "results": [ 00:08:38.548 { 00:08:38.548 "job": "raid_bdev1", 00:08:38.548 "core_mask": "0x1", 00:08:38.548 "workload": "randrw", 00:08:38.548 "percentage": 50, 
00:08:38.548 "status": "finished", 00:08:38.548 "queue_depth": 1, 00:08:38.548 "io_size": 131072, 00:08:38.548 "runtime": 1.372803, 00:08:38.548 "iops": 14632.106718881005, 00:08:38.548 "mibps": 1829.0133398601256, 00:08:38.548 "io_failed": 1, 00:08:38.548 "io_timeout": 0, 00:08:38.548 "avg_latency_us": 96.04957888348038, 00:08:38.548 "min_latency_us": 24.929257641921396, 00:08:38.548 "max_latency_us": 1438.071615720524 00:08:38.548 } 00:08:38.548 ], 00:08:38.548 "core_count": 1 00:08:38.548 } 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76746 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76746 ']' 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76746 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76746 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.548 killing process with pid 76746 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76746' 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76746 00:08:38.548 [2024-12-07 16:34:37.193031] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.548 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76746 00:08:38.548 [2024-12-07 
16:34:37.240782] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.808 16:34:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7jVJqSpR9f 00:08:38.808 16:34:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:38.808 16:34:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:38.808 16:34:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:38.808 16:34:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:38.808 16:34:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:38.808 16:34:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:38.808 16:34:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:38.808 00:08:38.808 real 0m3.493s 00:08:38.808 user 0m4.247s 00:08:38.808 sys 0m0.645s 00:08:38.808 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.808 16:34:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.808 ************************************ 00:08:38.809 END TEST raid_read_error_test 00:08:38.809 ************************************ 00:08:38.809 16:34:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:38.809 16:34:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:38.809 16:34:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.809 16:34:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.809 ************************************ 00:08:38.809 START TEST raid_write_error_test 00:08:38.809 ************************************ 00:08:38.809 16:34:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:38.809 16:34:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:38.809 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:38.809 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:39.068 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:39.068 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:39.069 16:34:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lQXNb71XKc 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76881 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76881 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76881 ']' 00:08:39.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.069 16:34:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.069 [2024-12-07 16:34:37.813059] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:39.069 [2024-12-07 16:34:37.813196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76881 ] 00:08:39.329 [2024-12-07 16:34:37.978136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.329 [2024-12-07 16:34:38.051416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.329 [2024-12-07 16:34:38.130411] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.329 [2024-12-07 16:34:38.130545] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.905 BaseBdev1_malloc 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.905 true 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.905 [2024-12-07 16:34:38.697079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:39.905 [2024-12-07 16:34:38.697154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.905 [2024-12-07 16:34:38.697183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:39.905 [2024-12-07 16:34:38.697192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.905 [2024-12-07 16:34:38.699670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.905 [2024-12-07 16:34:38.699779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:39.905 BaseBdev1 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.905 BaseBdev2_malloc 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.905 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.906 true 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.906 [2024-12-07 16:34:38.744775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:39.906 [2024-12-07 16:34:38.744827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.906 [2024-12-07 16:34:38.744846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:39.906 [2024-12-07 16:34:38.744856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.906 [2024-12-07 16:34:38.747193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.906 [2024-12-07 16:34:38.747226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:39.906 BaseBdev2 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.906 16:34:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.906 BaseBdev3_malloc 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.906 true 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.906 [2024-12-07 16:34:38.779403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:39.906 [2024-12-07 16:34:38.779509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.906 [2024-12-07 16:34:38.779532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:39.906 [2024-12-07 16:34:38.779542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.906 [2024-12-07 16:34:38.781882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.906 [2024-12-07 16:34:38.781914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:39.906 BaseBdev3 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.906 [2024-12-07 16:34:38.787455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.906 [2024-12-07 16:34:38.789517] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.906 [2024-12-07 16:34:38.789612] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:39.906 [2024-12-07 16:34:38.789785] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:39.906 [2024-12-07 16:34:38.789800] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:39.906 [2024-12-07 16:34:38.790047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:39.906 [2024-12-07 16:34:38.790183] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:39.906 [2024-12-07 16:34:38.790193] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:39.906 [2024-12-07 16:34:38.790327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.906 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.184 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.185 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.185 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.185 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.185 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.185 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.185 "name": "raid_bdev1", 00:08:40.185 "uuid": "890e1c48-3839-4cb2-bf90-b22af668603a", 00:08:40.185 "strip_size_kb": 64, 00:08:40.185 "state": "online", 00:08:40.185 "raid_level": "raid0", 00:08:40.185 "superblock": true, 00:08:40.185 "num_base_bdevs": 3, 00:08:40.185 "num_base_bdevs_discovered": 3, 00:08:40.185 "num_base_bdevs_operational": 3, 00:08:40.185 "base_bdevs_list": [ 00:08:40.185 { 00:08:40.185 "name": "BaseBdev1", 
00:08:40.185 "uuid": "73a3ae21-7b9f-5b0a-bbb4-0f30945f7723", 00:08:40.185 "is_configured": true, 00:08:40.185 "data_offset": 2048, 00:08:40.185 "data_size": 63488 00:08:40.185 }, 00:08:40.185 { 00:08:40.185 "name": "BaseBdev2", 00:08:40.185 "uuid": "c6e45afa-803f-5c1b-805b-cd3e1cb3e1da", 00:08:40.185 "is_configured": true, 00:08:40.185 "data_offset": 2048, 00:08:40.185 "data_size": 63488 00:08:40.185 }, 00:08:40.185 { 00:08:40.185 "name": "BaseBdev3", 00:08:40.185 "uuid": "af435549-5585-5a40-80a8-25e9884695e0", 00:08:40.185 "is_configured": true, 00:08:40.185 "data_offset": 2048, 00:08:40.185 "data_size": 63488 00:08:40.185 } 00:08:40.185 ] 00:08:40.185 }' 00:08:40.185 16:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.185 16:34:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.454 16:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:40.454 16:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:40.454 [2024-12-07 16:34:39.306976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.391 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.650 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.650 "name": "raid_bdev1", 00:08:41.650 "uuid": "890e1c48-3839-4cb2-bf90-b22af668603a", 00:08:41.650 "strip_size_kb": 64, 00:08:41.650 "state": "online", 00:08:41.650 
"raid_level": "raid0", 00:08:41.650 "superblock": true, 00:08:41.650 "num_base_bdevs": 3, 00:08:41.650 "num_base_bdevs_discovered": 3, 00:08:41.650 "num_base_bdevs_operational": 3, 00:08:41.650 "base_bdevs_list": [ 00:08:41.650 { 00:08:41.650 "name": "BaseBdev1", 00:08:41.650 "uuid": "73a3ae21-7b9f-5b0a-bbb4-0f30945f7723", 00:08:41.650 "is_configured": true, 00:08:41.650 "data_offset": 2048, 00:08:41.650 "data_size": 63488 00:08:41.650 }, 00:08:41.650 { 00:08:41.650 "name": "BaseBdev2", 00:08:41.650 "uuid": "c6e45afa-803f-5c1b-805b-cd3e1cb3e1da", 00:08:41.650 "is_configured": true, 00:08:41.650 "data_offset": 2048, 00:08:41.650 "data_size": 63488 00:08:41.650 }, 00:08:41.650 { 00:08:41.650 "name": "BaseBdev3", 00:08:41.650 "uuid": "af435549-5585-5a40-80a8-25e9884695e0", 00:08:41.650 "is_configured": true, 00:08:41.650 "data_offset": 2048, 00:08:41.650 "data_size": 63488 00:08:41.650 } 00:08:41.650 ] 00:08:41.650 }' 00:08:41.650 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.650 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.910 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:41.910 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.910 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.910 [2024-12-07 16:34:40.647386] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.910 [2024-12-07 16:34:40.647470] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.910 [2024-12-07 16:34:40.649947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.910 [2024-12-07 16:34:40.650035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.910 [2024-12-07 16:34:40.650093] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.910 [2024-12-07 16:34:40.650157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:41.910 { 00:08:41.910 "results": [ 00:08:41.910 { 00:08:41.910 "job": "raid_bdev1", 00:08:41.910 "core_mask": "0x1", 00:08:41.910 "workload": "randrw", 00:08:41.910 "percentage": 50, 00:08:41.910 "status": "finished", 00:08:41.910 "queue_depth": 1, 00:08:41.910 "io_size": 131072, 00:08:41.910 "runtime": 1.341038, 00:08:41.910 "iops": 14634.932045176945, 00:08:41.910 "mibps": 1829.3665056471182, 00:08:41.910 "io_failed": 1, 00:08:41.910 "io_timeout": 0, 00:08:41.910 "avg_latency_us": 95.83759579031025, 00:08:41.910 "min_latency_us": 20.79301310043668, 00:08:41.910 "max_latency_us": 1373.6803493449781 00:08:41.910 } 00:08:41.910 ], 00:08:41.910 "core_count": 1 00:08:41.910 } 00:08:41.910 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.910 16:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76881 00:08:41.910 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76881 ']' 00:08:41.910 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76881 00:08:41.910 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:41.910 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.910 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76881 00:08:41.910 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.910 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.910 killing process with pid 76881 00:08:41.910 16:34:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76881' 00:08:41.910 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76881 00:08:41.910 [2024-12-07 16:34:40.688082] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.910 16:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76881 00:08:41.910 [2024-12-07 16:34:40.736665] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.480 16:34:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lQXNb71XKc 00:08:42.480 16:34:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:42.480 16:34:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:42.480 16:34:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:42.480 16:34:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:42.480 ************************************ 00:08:42.480 END TEST raid_write_error_test 00:08:42.480 ************************************ 00:08:42.480 16:34:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:42.480 16:34:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:42.480 16:34:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:42.480 00:08:42.480 real 0m3.412s 00:08:42.480 user 0m4.122s 00:08:42.480 sys 0m0.643s 00:08:42.480 16:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.480 16:34:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.480 16:34:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:42.480 16:34:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:42.480 16:34:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:42.480 16:34:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.480 16:34:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.480 ************************************ 00:08:42.480 START TEST raid_state_function_test 00:08:42.480 ************************************ 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:42.480 16:34:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77008 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77008' 00:08:42.480 Process raid pid: 77008 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 77008 00:08:42.480 16:34:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 77008 ']' 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.480 16:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.480 [2024-12-07 16:34:41.286036] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:42.480 [2024-12-07 16:34:41.286214] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.740 [2024-12-07 16:34:41.451621] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.740 [2024-12-07 16:34:41.520099] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.740 [2024-12-07 16:34:41.596696] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.740 [2024-12-07 16:34:41.596842] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.309 [2024-12-07 16:34:42.156368] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:43.309 [2024-12-07 16:34:42.156472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.309 [2024-12-07 16:34:42.156506] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.309 [2024-12-07 16:34:42.156529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:43.309 [2024-12-07 16:34:42.156546] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:43.309 [2024-12-07 16:34:42.156570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.309 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.569 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.569 "name": "Existed_Raid", 00:08:43.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.569 "strip_size_kb": 64, 00:08:43.569 "state": "configuring", 00:08:43.569 "raid_level": "concat", 00:08:43.569 "superblock": false, 00:08:43.569 "num_base_bdevs": 3, 00:08:43.569 "num_base_bdevs_discovered": 0, 00:08:43.569 "num_base_bdevs_operational": 3, 00:08:43.569 "base_bdevs_list": [ 00:08:43.569 { 00:08:43.569 "name": "BaseBdev1", 00:08:43.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.569 "is_configured": false, 00:08:43.569 "data_offset": 0, 00:08:43.569 "data_size": 0 00:08:43.569 }, 00:08:43.569 { 00:08:43.569 "name": "BaseBdev2", 00:08:43.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.569 "is_configured": false, 00:08:43.569 "data_offset": 0, 00:08:43.569 "data_size": 0 00:08:43.569 }, 00:08:43.569 { 00:08:43.569 "name": "BaseBdev3", 00:08:43.569 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:43.569 "is_configured": false, 00:08:43.569 "data_offset": 0, 00:08:43.569 "data_size": 0 00:08:43.569 } 00:08:43.569 ] 00:08:43.569 }' 00:08:43.569 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.569 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.828 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:43.828 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.828 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.828 [2024-12-07 16:34:42.607513] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:43.828 [2024-12-07 16:34:42.607610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:43.828 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.828 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:43.828 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.828 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.828 [2024-12-07 16:34:42.619491] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:43.828 [2024-12-07 16:34:42.619571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.828 [2024-12-07 16:34:42.619597] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.828 [2024-12-07 16:34:42.619620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:43.828 [2024-12-07 16:34:42.619637] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:43.828 [2024-12-07 16:34:42.619658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:43.828 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.829 [2024-12-07 16:34:42.646648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.829 BaseBdev1 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.829 [ 00:08:43.829 { 00:08:43.829 "name": "BaseBdev1", 00:08:43.829 "aliases": [ 00:08:43.829 "257d7909-5371-4ac9-8006-cc068deec044" 00:08:43.829 ], 00:08:43.829 "product_name": "Malloc disk", 00:08:43.829 "block_size": 512, 00:08:43.829 "num_blocks": 65536, 00:08:43.829 "uuid": "257d7909-5371-4ac9-8006-cc068deec044", 00:08:43.829 "assigned_rate_limits": { 00:08:43.829 "rw_ios_per_sec": 0, 00:08:43.829 "rw_mbytes_per_sec": 0, 00:08:43.829 "r_mbytes_per_sec": 0, 00:08:43.829 "w_mbytes_per_sec": 0 00:08:43.829 }, 00:08:43.829 "claimed": true, 00:08:43.829 "claim_type": "exclusive_write", 00:08:43.829 "zoned": false, 00:08:43.829 "supported_io_types": { 00:08:43.829 "read": true, 00:08:43.829 "write": true, 00:08:43.829 "unmap": true, 00:08:43.829 "flush": true, 00:08:43.829 "reset": true, 00:08:43.829 "nvme_admin": false, 00:08:43.829 "nvme_io": false, 00:08:43.829 "nvme_io_md": false, 00:08:43.829 "write_zeroes": true, 00:08:43.829 "zcopy": true, 00:08:43.829 "get_zone_info": false, 00:08:43.829 "zone_management": false, 00:08:43.829 "zone_append": false, 00:08:43.829 "compare": false, 00:08:43.829 "compare_and_write": false, 00:08:43.829 "abort": true, 00:08:43.829 "seek_hole": false, 00:08:43.829 "seek_data": false, 00:08:43.829 "copy": true, 00:08:43.829 "nvme_iov_md": false 00:08:43.829 }, 00:08:43.829 "memory_domains": [ 00:08:43.829 { 00:08:43.829 "dma_device_id": "system", 00:08:43.829 "dma_device_type": 1 00:08:43.829 }, 00:08:43.829 { 00:08:43.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:43.829 "dma_device_type": 2 00:08:43.829 } 00:08:43.829 ], 00:08:43.829 "driver_specific": {} 00:08:43.829 } 00:08:43.829 ] 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.829 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.829 16:34:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.088 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.088 "name": "Existed_Raid", 00:08:44.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.088 "strip_size_kb": 64, 00:08:44.088 "state": "configuring", 00:08:44.088 "raid_level": "concat", 00:08:44.088 "superblock": false, 00:08:44.088 "num_base_bdevs": 3, 00:08:44.088 "num_base_bdevs_discovered": 1, 00:08:44.088 "num_base_bdevs_operational": 3, 00:08:44.088 "base_bdevs_list": [ 00:08:44.088 { 00:08:44.088 "name": "BaseBdev1", 00:08:44.088 "uuid": "257d7909-5371-4ac9-8006-cc068deec044", 00:08:44.088 "is_configured": true, 00:08:44.088 "data_offset": 0, 00:08:44.088 "data_size": 65536 00:08:44.088 }, 00:08:44.088 { 00:08:44.088 "name": "BaseBdev2", 00:08:44.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.088 "is_configured": false, 00:08:44.088 "data_offset": 0, 00:08:44.088 "data_size": 0 00:08:44.088 }, 00:08:44.088 { 00:08:44.088 "name": "BaseBdev3", 00:08:44.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.088 "is_configured": false, 00:08:44.088 "data_offset": 0, 00:08:44.088 "data_size": 0 00:08:44.088 } 00:08:44.088 ] 00:08:44.088 }' 00:08:44.088 16:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.088 16:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.349 [2024-12-07 16:34:43.153843] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:44.349 [2024-12-07 16:34:43.153951] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.349 [2024-12-07 16:34:43.165839] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.349 [2024-12-07 16:34:43.168085] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.349 [2024-12-07 16:34:43.168159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.349 [2024-12-07 16:34:43.168187] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:44.349 [2024-12-07 16:34:43.168220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.349 16:34:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.349 "name": "Existed_Raid", 00:08:44.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.349 "strip_size_kb": 64, 00:08:44.349 "state": "configuring", 00:08:44.349 "raid_level": "concat", 00:08:44.349 "superblock": false, 00:08:44.349 "num_base_bdevs": 3, 00:08:44.349 "num_base_bdevs_discovered": 1, 00:08:44.349 "num_base_bdevs_operational": 3, 00:08:44.349 "base_bdevs_list": [ 00:08:44.349 { 00:08:44.349 "name": "BaseBdev1", 00:08:44.349 "uuid": "257d7909-5371-4ac9-8006-cc068deec044", 00:08:44.349 "is_configured": true, 00:08:44.349 "data_offset": 
0, 00:08:44.349 "data_size": 65536 00:08:44.349 }, 00:08:44.349 { 00:08:44.349 "name": "BaseBdev2", 00:08:44.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.349 "is_configured": false, 00:08:44.349 "data_offset": 0, 00:08:44.349 "data_size": 0 00:08:44.349 }, 00:08:44.349 { 00:08:44.349 "name": "BaseBdev3", 00:08:44.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.349 "is_configured": false, 00:08:44.349 "data_offset": 0, 00:08:44.349 "data_size": 0 00:08:44.349 } 00:08:44.349 ] 00:08:44.349 }' 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.349 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.921 [2024-12-07 16:34:43.643730] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.921 BaseBdev2 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.921 [ 00:08:44.921 { 00:08:44.921 "name": "BaseBdev2", 00:08:44.921 "aliases": [ 00:08:44.921 "7d38db79-4e0b-47a0-a903-b803d85ff3f8" 00:08:44.921 ], 00:08:44.921 "product_name": "Malloc disk", 00:08:44.921 "block_size": 512, 00:08:44.921 "num_blocks": 65536, 00:08:44.921 "uuid": "7d38db79-4e0b-47a0-a903-b803d85ff3f8", 00:08:44.921 "assigned_rate_limits": { 00:08:44.921 "rw_ios_per_sec": 0, 00:08:44.921 "rw_mbytes_per_sec": 0, 00:08:44.921 "r_mbytes_per_sec": 0, 00:08:44.921 "w_mbytes_per_sec": 0 00:08:44.921 }, 00:08:44.921 "claimed": true, 00:08:44.921 "claim_type": "exclusive_write", 00:08:44.921 "zoned": false, 00:08:44.921 "supported_io_types": { 00:08:44.921 "read": true, 00:08:44.921 "write": true, 00:08:44.921 "unmap": true, 00:08:44.921 "flush": true, 00:08:44.921 "reset": true, 00:08:44.921 "nvme_admin": false, 00:08:44.921 "nvme_io": false, 00:08:44.921 "nvme_io_md": false, 00:08:44.921 "write_zeroes": true, 00:08:44.921 "zcopy": true, 00:08:44.921 "get_zone_info": false, 00:08:44.921 "zone_management": false, 00:08:44.921 "zone_append": false, 00:08:44.921 "compare": false, 00:08:44.921 "compare_and_write": false, 00:08:44.921 "abort": true, 00:08:44.921 "seek_hole": 
false, 00:08:44.921 "seek_data": false, 00:08:44.921 "copy": true, 00:08:44.921 "nvme_iov_md": false 00:08:44.921 }, 00:08:44.921 "memory_domains": [ 00:08:44.921 { 00:08:44.921 "dma_device_id": "system", 00:08:44.921 "dma_device_type": 1 00:08:44.921 }, 00:08:44.921 { 00:08:44.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.921 "dma_device_type": 2 00:08:44.921 } 00:08:44.921 ], 00:08:44.921 "driver_specific": {} 00:08:44.921 } 00:08:44.921 ] 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.921 "name": "Existed_Raid", 00:08:44.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.921 "strip_size_kb": 64, 00:08:44.921 "state": "configuring", 00:08:44.921 "raid_level": "concat", 00:08:44.921 "superblock": false, 00:08:44.921 "num_base_bdevs": 3, 00:08:44.921 "num_base_bdevs_discovered": 2, 00:08:44.921 "num_base_bdevs_operational": 3, 00:08:44.921 "base_bdevs_list": [ 00:08:44.921 { 00:08:44.921 "name": "BaseBdev1", 00:08:44.921 "uuid": "257d7909-5371-4ac9-8006-cc068deec044", 00:08:44.921 "is_configured": true, 00:08:44.921 "data_offset": 0, 00:08:44.921 "data_size": 65536 00:08:44.921 }, 00:08:44.921 { 00:08:44.921 "name": "BaseBdev2", 00:08:44.921 "uuid": "7d38db79-4e0b-47a0-a903-b803d85ff3f8", 00:08:44.921 "is_configured": true, 00:08:44.921 "data_offset": 0, 00:08:44.921 "data_size": 65536 00:08:44.921 }, 00:08:44.921 { 00:08:44.921 "name": "BaseBdev3", 00:08:44.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.921 "is_configured": false, 00:08:44.921 "data_offset": 0, 00:08:44.921 "data_size": 0 00:08:44.921 } 00:08:44.921 ] 00:08:44.921 }' 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.921 16:34:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.490 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.491 [2024-12-07 16:34:44.160121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:45.491 [2024-12-07 16:34:44.160271] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:45.491 [2024-12-07 16:34:44.160289] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:45.491 [2024-12-07 16:34:44.160687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:45.491 [2024-12-07 16:34:44.160835] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:45.491 [2024-12-07 16:34:44.160843] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:45.491 [2024-12-07 16:34:44.161071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.491 BaseBdev3 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:45.491 16:34:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.491 [ 00:08:45.491 { 00:08:45.491 "name": "BaseBdev3", 00:08:45.491 "aliases": [ 00:08:45.491 "1761d68c-83e2-498a-81cd-7c81a8525758" 00:08:45.491 ], 00:08:45.491 "product_name": "Malloc disk", 00:08:45.491 "block_size": 512, 00:08:45.491 "num_blocks": 65536, 00:08:45.491 "uuid": "1761d68c-83e2-498a-81cd-7c81a8525758", 00:08:45.491 "assigned_rate_limits": { 00:08:45.491 "rw_ios_per_sec": 0, 00:08:45.491 "rw_mbytes_per_sec": 0, 00:08:45.491 "r_mbytes_per_sec": 0, 00:08:45.491 "w_mbytes_per_sec": 0 00:08:45.491 }, 00:08:45.491 "claimed": true, 00:08:45.491 "claim_type": "exclusive_write", 00:08:45.491 "zoned": false, 00:08:45.491 "supported_io_types": { 00:08:45.491 "read": true, 00:08:45.491 "write": true, 00:08:45.491 "unmap": true, 00:08:45.491 "flush": true, 00:08:45.491 "reset": true, 00:08:45.491 "nvme_admin": false, 00:08:45.491 "nvme_io": false, 00:08:45.491 "nvme_io_md": false, 00:08:45.491 "write_zeroes": true, 00:08:45.491 "zcopy": true, 00:08:45.491 "get_zone_info": false, 00:08:45.491 "zone_management": false, 00:08:45.491 "zone_append": false, 00:08:45.491 "compare": false, 
00:08:45.491 "compare_and_write": false, 00:08:45.491 "abort": true, 00:08:45.491 "seek_hole": false, 00:08:45.491 "seek_data": false, 00:08:45.491 "copy": true, 00:08:45.491 "nvme_iov_md": false 00:08:45.491 }, 00:08:45.491 "memory_domains": [ 00:08:45.491 { 00:08:45.491 "dma_device_id": "system", 00:08:45.491 "dma_device_type": 1 00:08:45.491 }, 00:08:45.491 { 00:08:45.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.491 "dma_device_type": 2 00:08:45.491 } 00:08:45.491 ], 00:08:45.491 "driver_specific": {} 00:08:45.491 } 00:08:45.491 ] 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.491 "name": "Existed_Raid", 00:08:45.491 "uuid": "0509a6c1-55c8-4a0b-89ab-eba4f8af0fad", 00:08:45.491 "strip_size_kb": 64, 00:08:45.491 "state": "online", 00:08:45.491 "raid_level": "concat", 00:08:45.491 "superblock": false, 00:08:45.491 "num_base_bdevs": 3, 00:08:45.491 "num_base_bdevs_discovered": 3, 00:08:45.491 "num_base_bdevs_operational": 3, 00:08:45.491 "base_bdevs_list": [ 00:08:45.491 { 00:08:45.491 "name": "BaseBdev1", 00:08:45.491 "uuid": "257d7909-5371-4ac9-8006-cc068deec044", 00:08:45.491 "is_configured": true, 00:08:45.491 "data_offset": 0, 00:08:45.491 "data_size": 65536 00:08:45.491 }, 00:08:45.491 { 00:08:45.491 "name": "BaseBdev2", 00:08:45.491 "uuid": "7d38db79-4e0b-47a0-a903-b803d85ff3f8", 00:08:45.491 "is_configured": true, 00:08:45.491 "data_offset": 0, 00:08:45.491 "data_size": 65536 00:08:45.491 }, 00:08:45.491 { 00:08:45.491 "name": "BaseBdev3", 00:08:45.491 "uuid": "1761d68c-83e2-498a-81cd-7c81a8525758", 00:08:45.491 "is_configured": true, 00:08:45.491 "data_offset": 0, 00:08:45.491 "data_size": 65536 00:08:45.491 } 00:08:45.491 ] 00:08:45.491 }' 00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:45.491 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.060 [2024-12-07 16:34:44.687616] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.060 "name": "Existed_Raid", 00:08:46.060 "aliases": [ 00:08:46.060 "0509a6c1-55c8-4a0b-89ab-eba4f8af0fad" 00:08:46.060 ], 00:08:46.060 "product_name": "Raid Volume", 00:08:46.060 "block_size": 512, 00:08:46.060 "num_blocks": 196608, 00:08:46.060 "uuid": "0509a6c1-55c8-4a0b-89ab-eba4f8af0fad", 00:08:46.060 "assigned_rate_limits": { 00:08:46.060 "rw_ios_per_sec": 0, 00:08:46.060 "rw_mbytes_per_sec": 0, 00:08:46.060 "r_mbytes_per_sec": 
0, 00:08:46.060 "w_mbytes_per_sec": 0 00:08:46.060 }, 00:08:46.060 "claimed": false, 00:08:46.060 "zoned": false, 00:08:46.060 "supported_io_types": { 00:08:46.060 "read": true, 00:08:46.060 "write": true, 00:08:46.060 "unmap": true, 00:08:46.060 "flush": true, 00:08:46.060 "reset": true, 00:08:46.060 "nvme_admin": false, 00:08:46.060 "nvme_io": false, 00:08:46.060 "nvme_io_md": false, 00:08:46.060 "write_zeroes": true, 00:08:46.060 "zcopy": false, 00:08:46.060 "get_zone_info": false, 00:08:46.060 "zone_management": false, 00:08:46.060 "zone_append": false, 00:08:46.060 "compare": false, 00:08:46.060 "compare_and_write": false, 00:08:46.060 "abort": false, 00:08:46.060 "seek_hole": false, 00:08:46.060 "seek_data": false, 00:08:46.060 "copy": false, 00:08:46.060 "nvme_iov_md": false 00:08:46.060 }, 00:08:46.060 "memory_domains": [ 00:08:46.060 { 00:08:46.060 "dma_device_id": "system", 00:08:46.060 "dma_device_type": 1 00:08:46.060 }, 00:08:46.060 { 00:08:46.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.060 "dma_device_type": 2 00:08:46.060 }, 00:08:46.060 { 00:08:46.060 "dma_device_id": "system", 00:08:46.060 "dma_device_type": 1 00:08:46.060 }, 00:08:46.060 { 00:08:46.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.060 "dma_device_type": 2 00:08:46.060 }, 00:08:46.060 { 00:08:46.060 "dma_device_id": "system", 00:08:46.060 "dma_device_type": 1 00:08:46.060 }, 00:08:46.060 { 00:08:46.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.060 "dma_device_type": 2 00:08:46.060 } 00:08:46.060 ], 00:08:46.060 "driver_specific": { 00:08:46.060 "raid": { 00:08:46.060 "uuid": "0509a6c1-55c8-4a0b-89ab-eba4f8af0fad", 00:08:46.060 "strip_size_kb": 64, 00:08:46.060 "state": "online", 00:08:46.060 "raid_level": "concat", 00:08:46.060 "superblock": false, 00:08:46.060 "num_base_bdevs": 3, 00:08:46.060 "num_base_bdevs_discovered": 3, 00:08:46.060 "num_base_bdevs_operational": 3, 00:08:46.060 "base_bdevs_list": [ 00:08:46.060 { 00:08:46.060 "name": "BaseBdev1", 
00:08:46.060 "uuid": "257d7909-5371-4ac9-8006-cc068deec044", 00:08:46.060 "is_configured": true, 00:08:46.060 "data_offset": 0, 00:08:46.060 "data_size": 65536 00:08:46.060 }, 00:08:46.060 { 00:08:46.060 "name": "BaseBdev2", 00:08:46.060 "uuid": "7d38db79-4e0b-47a0-a903-b803d85ff3f8", 00:08:46.060 "is_configured": true, 00:08:46.060 "data_offset": 0, 00:08:46.060 "data_size": 65536 00:08:46.060 }, 00:08:46.060 { 00:08:46.060 "name": "BaseBdev3", 00:08:46.060 "uuid": "1761d68c-83e2-498a-81cd-7c81a8525758", 00:08:46.060 "is_configured": true, 00:08:46.060 "data_offset": 0, 00:08:46.060 "data_size": 65536 00:08:46.060 } 00:08:46.060 ] 00:08:46.060 } 00:08:46.060 } 00:08:46.060 }' 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:46.060 BaseBdev2 00:08:46.060 BaseBdev3' 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:46.060 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.061 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.061 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.061 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.321 [2024-12-07 16:34:44.970852] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:46.321 [2024-12-07 16:34:44.970945] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.321 [2024-12-07 16:34:44.971034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.321 16:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.321 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.321 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.321 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.321 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.321 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.321 "name": "Existed_Raid", 00:08:46.321 "uuid": "0509a6c1-55c8-4a0b-89ab-eba4f8af0fad", 00:08:46.321 "strip_size_kb": 64, 00:08:46.321 "state": "offline", 00:08:46.321 "raid_level": "concat", 00:08:46.321 "superblock": false, 00:08:46.321 "num_base_bdevs": 3, 00:08:46.321 "num_base_bdevs_discovered": 2, 00:08:46.321 "num_base_bdevs_operational": 2, 00:08:46.321 "base_bdevs_list": [ 00:08:46.321 { 00:08:46.321 "name": null, 00:08:46.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.321 "is_configured": false, 00:08:46.321 "data_offset": 0, 00:08:46.321 "data_size": 65536 00:08:46.321 }, 00:08:46.321 { 00:08:46.321 "name": "BaseBdev2", 00:08:46.321 "uuid": 
"7d38db79-4e0b-47a0-a903-b803d85ff3f8", 00:08:46.321 "is_configured": true, 00:08:46.321 "data_offset": 0, 00:08:46.321 "data_size": 65536 00:08:46.321 }, 00:08:46.321 { 00:08:46.321 "name": "BaseBdev3", 00:08:46.321 "uuid": "1761d68c-83e2-498a-81cd-7c81a8525758", 00:08:46.321 "is_configured": true, 00:08:46.321 "data_offset": 0, 00:08:46.321 "data_size": 65536 00:08:46.321 } 00:08:46.321 ] 00:08:46.321 }' 00:08:46.321 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.321 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.581 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:46.581 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:46.581 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.581 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.581 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:46.581 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.581 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.581 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:46.581 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:46.581 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:46.581 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.581 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.840 [2024-12-07 16:34:45.479132] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.840 [2024-12-07 16:34:45.559849] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:46.840 [2024-12-07 16:34:45.559908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:46.840 16:34:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.840 BaseBdev2 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:46.840 
16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.840 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.840 [
00:08:46.840 {
00:08:46.840 "name": "BaseBdev2",
00:08:46.840 "aliases": [
00:08:46.840 "330ac9a0-3e27-4337-8cef-63e391800c6f"
00:08:46.840 ],
00:08:46.840 "product_name": "Malloc disk",
00:08:46.840 "block_size": 512,
00:08:46.840 "num_blocks": 65536,
00:08:46.840 "uuid": "330ac9a0-3e27-4337-8cef-63e391800c6f",
00:08:46.840 "assigned_rate_limits": {
00:08:46.840 "rw_ios_per_sec": 0,
00:08:46.840 "rw_mbytes_per_sec": 0,
00:08:46.840 "r_mbytes_per_sec": 0,
00:08:46.840 "w_mbytes_per_sec": 0
00:08:46.840 },
00:08:46.840 "claimed": false,
00:08:46.840 "zoned": false,
00:08:46.840 "supported_io_types": {
00:08:46.840 "read": true,
00:08:46.840 "write": true,
00:08:46.840 "unmap": true,
00:08:46.840 "flush": true,
00:08:46.840 "reset": true,
00:08:46.840 "nvme_admin": false,
00:08:46.840 "nvme_io": false,
00:08:46.841 "nvme_io_md": false,
00:08:46.841 "write_zeroes": true,
00:08:46.841 "zcopy": true,
00:08:46.841 "get_zone_info": false,
00:08:46.841 "zone_management": false,
00:08:46.841 "zone_append": false,
00:08:46.841 "compare": false,
00:08:46.841 "compare_and_write": false,
00:08:46.841 "abort": true,
00:08:46.841 "seek_hole": false,
00:08:46.841 "seek_data": false,
00:08:46.841 "copy": true,
00:08:46.841 "nvme_iov_md": false
00:08:46.841 },
00:08:46.841 "memory_domains": [
00:08:46.841 {
00:08:46.841 "dma_device_id": "system",
00:08:46.841 "dma_device_type": 1
00:08:46.841 },
00:08:46.841 {
00:08:46.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:46.841 "dma_device_type": 2
00:08:46.841 }
00:08:46.841 ],
00:08:46.841 "driver_specific": {}
00:08:46.841 }
00:08:46.841 ]
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.841 BaseBdev3
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.841 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.841 [
00:08:46.841 {
00:08:46.841 "name": "BaseBdev3",
00:08:46.841 "aliases": [
00:08:46.841 "0c2af85e-07c9-4fb7-89c0-c3473be93665"
00:08:46.841 ],
00:08:46.841 "product_name": "Malloc disk",
00:08:46.841 "block_size": 512,
00:08:46.841 "num_blocks": 65536,
00:08:46.841 "uuid": "0c2af85e-07c9-4fb7-89c0-c3473be93665",
00:08:46.841 "assigned_rate_limits": {
00:08:46.841 "rw_ios_per_sec": 0,
00:08:46.841 "rw_mbytes_per_sec": 0,
00:08:46.841 "r_mbytes_per_sec": 0,
00:08:46.841 "w_mbytes_per_sec": 0
00:08:46.841 },
00:08:46.841 "claimed": false,
00:08:46.841 "zoned": false,
00:08:47.100 "supported_io_types": {
00:08:47.100 "read": true,
00:08:47.100 "write": true,
00:08:47.100 "unmap": true,
00:08:47.100 "flush": true,
00:08:47.100 "reset": true,
00:08:47.100 "nvme_admin": false,
00:08:47.100 "nvme_io": false,
00:08:47.100 "nvme_io_md": false,
00:08:47.100 "write_zeroes": true,
00:08:47.100 "zcopy": true,
00:08:47.100 "get_zone_info": false,
00:08:47.100 "zone_management": false,
00:08:47.100 "zone_append": false,
00:08:47.100 "compare": false,
00:08:47.100 "compare_and_write": false,
00:08:47.100 "abort": true,
00:08:47.100 "seek_hole": false,
00:08:47.100 "seek_data": false,
00:08:47.100 "copy": true,
00:08:47.100 "nvme_iov_md": false
00:08:47.100 },
00:08:47.100 "memory_domains": [
00:08:47.100 {
00:08:47.100 "dma_device_id": "system",
00:08:47.100 "dma_device_type": 1
00:08:47.100 },
00:08:47.100 {
00:08:47.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:47.100 "dma_device_type": 2
00:08:47.100 }
00:08:47.100 ],
00:08:47.100 "driver_specific": {}
00:08:47.100 }
00:08:47.100 ]
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.100 [2024-12-07 16:34:45.751828] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:47.100 [2024-12-07 16:34:45.751920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:47.100 [2024-12-07 16:34:45.751967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:47.100 [2024-12-07 16:34:45.754240] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:47.100 "name": "Existed_Raid",
00:08:47.100 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:47.100 "strip_size_kb": 64,
00:08:47.100 "state": "configuring",
00:08:47.100 "raid_level": "concat",
00:08:47.100 "superblock": false,
00:08:47.100 "num_base_bdevs": 3,
00:08:47.100 "num_base_bdevs_discovered": 2,
00:08:47.100 "num_base_bdevs_operational": 3,
00:08:47.100 "base_bdevs_list": [
00:08:47.100 {
00:08:47.100 "name": "BaseBdev1",
00:08:47.100 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:47.100 "is_configured": false,
00:08:47.100 "data_offset": 0,
00:08:47.100 "data_size": 0
00:08:47.100 },
00:08:47.100 {
00:08:47.100 "name": "BaseBdev2",
00:08:47.100 "uuid": "330ac9a0-3e27-4337-8cef-63e391800c6f",
00:08:47.100 "is_configured": true,
00:08:47.100 "data_offset": 0,
00:08:47.100 "data_size": 65536
00:08:47.100 },
00:08:47.100 {
00:08:47.100 "name": "BaseBdev3",
00:08:47.100 "uuid": "0c2af85e-07c9-4fb7-89c0-c3473be93665",
00:08:47.100 "is_configured": true,
00:08:47.100 "data_offset": 0,
00:08:47.100 "data_size": 65536
00:08:47.100 }
00:08:47.100 ]
00:08:47.100 }'
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:47.100 16:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.359 [2024-12-07 16:34:46.223045] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.359 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.618 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:47.618 "name": "Existed_Raid",
00:08:47.618 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:47.618 "strip_size_kb": 64,
00:08:47.618 "state": "configuring",
00:08:47.618 "raid_level": "concat",
00:08:47.618 "superblock": false,
00:08:47.618 "num_base_bdevs": 3,
00:08:47.618 "num_base_bdevs_discovered": 1,
00:08:47.618 "num_base_bdevs_operational": 3,
00:08:47.618 "base_bdevs_list": [
00:08:47.618 {
00:08:47.618 "name": "BaseBdev1",
00:08:47.618 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:47.618 "is_configured": false,
00:08:47.618 "data_offset": 0,
00:08:47.618 "data_size": 0
00:08:47.618 },
00:08:47.618 {
00:08:47.618 "name": null,
00:08:47.618 "uuid": "330ac9a0-3e27-4337-8cef-63e391800c6f",
00:08:47.618 "is_configured": false,
00:08:47.618 "data_offset": 0,
00:08:47.618 "data_size": 65536
00:08:47.618 },
00:08:47.618 {
00:08:47.618 "name": "BaseBdev3",
00:08:47.618 "uuid": "0c2af85e-07c9-4fb7-89c0-c3473be93665",
00:08:47.618 "is_configured": true,
00:08:47.618 "data_offset": 0,
00:08:47.618 "data_size": 65536
00:08:47.618 }
00:08:47.618 ]
00:08:47.618 }'
00:08:47.618 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:47.618 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.878 [2024-12-07 16:34:46.727394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:47.878 BaseBdev1
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.878 [
00:08:47.878 {
00:08:47.878 "name": "BaseBdev1",
00:08:47.878 "aliases": [
00:08:47.878 "05b4000b-0b4e-4482-93c9-03aeb42eb58c"
00:08:47.878 ],
00:08:47.878 "product_name": "Malloc disk",
00:08:47.878 "block_size": 512,
00:08:47.878 "num_blocks": 65536,
00:08:47.878 "uuid": "05b4000b-0b4e-4482-93c9-03aeb42eb58c",
00:08:47.878 "assigned_rate_limits": {
00:08:47.878 "rw_ios_per_sec": 0,
00:08:47.878 "rw_mbytes_per_sec": 0,
00:08:47.878 "r_mbytes_per_sec": 0,
00:08:47.878 "w_mbytes_per_sec": 0
00:08:47.878 },
00:08:47.878 "claimed": true,
00:08:47.878 "claim_type": "exclusive_write",
00:08:47.878 "zoned": false,
00:08:47.878 "supported_io_types": {
00:08:47.878 "read": true,
00:08:47.878 "write": true,
00:08:47.878 "unmap": true,
00:08:47.878 "flush": true,
00:08:47.878 "reset": true,
00:08:47.878 "nvme_admin": false,
00:08:47.878 "nvme_io": false,
00:08:47.878 "nvme_io_md": false,
00:08:47.878 "write_zeroes": true,
00:08:47.878 "zcopy": true,
00:08:47.878 "get_zone_info": false,
00:08:47.878 "zone_management": false,
00:08:47.878 "zone_append": false,
00:08:47.878 "compare": false,
00:08:47.878 "compare_and_write": false,
00:08:47.878 "abort": true,
00:08:47.878 "seek_hole": false,
00:08:47.878 "seek_data": false,
00:08:47.878 "copy": true,
00:08:47.878 "nvme_iov_md": false
00:08:47.878 },
00:08:47.878 "memory_domains": [
00:08:47.878 {
00:08:47.878 "dma_device_id": "system",
00:08:47.878 "dma_device_type": 1
00:08:47.878 },
00:08:47.878 {
00:08:47.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:47.878 "dma_device_type": 2
00:08:47.878 }
00:08:47.878 ],
00:08:47.878 "driver_specific": {}
00:08:47.878 }
00:08:47.878 ]
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.878 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.137 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.137 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:48.137 "name": "Existed_Raid",
00:08:48.137 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:48.137 "strip_size_kb": 64,
00:08:48.137 "state": "configuring",
00:08:48.137 "raid_level": "concat",
00:08:48.137 "superblock": false,
00:08:48.137 "num_base_bdevs": 3,
00:08:48.137 "num_base_bdevs_discovered": 2,
00:08:48.137 "num_base_bdevs_operational": 3,
00:08:48.137 "base_bdevs_list": [
00:08:48.137 {
00:08:48.137 "name": "BaseBdev1",
00:08:48.137 "uuid": "05b4000b-0b4e-4482-93c9-03aeb42eb58c",
00:08:48.137 "is_configured": true,
00:08:48.137 "data_offset": 0,
00:08:48.137 "data_size": 65536
00:08:48.137 },
00:08:48.137 {
00:08:48.137 "name": null,
00:08:48.137 "uuid": "330ac9a0-3e27-4337-8cef-63e391800c6f",
00:08:48.137 "is_configured": false,
00:08:48.137 "data_offset": 0,
00:08:48.137 "data_size": 65536
00:08:48.137 },
00:08:48.137 {
00:08:48.137 "name": "BaseBdev3",
00:08:48.137 "uuid": "0c2af85e-07c9-4fb7-89c0-c3473be93665",
00:08:48.137 "is_configured": true,
00:08:48.137 "data_offset": 0,
00:08:48.137 "data_size": 65536
00:08:48.137 }
00:08:48.137 ]
00:08:48.137 }'
00:08:48.137 16:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:48.137 16:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.397 [2024-12-07 16:34:47.234597] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.397 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.659 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:48.659 "name": "Existed_Raid",
00:08:48.659 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:48.659 "strip_size_kb": 64,
00:08:48.659 "state": "configuring",
00:08:48.659 "raid_level": "concat",
00:08:48.659 "superblock": false,
00:08:48.659 "num_base_bdevs": 3,
00:08:48.659 "num_base_bdevs_discovered": 1,
00:08:48.659 "num_base_bdevs_operational": 3,
00:08:48.659 "base_bdevs_list": [
00:08:48.659 {
00:08:48.659 "name": "BaseBdev1",
00:08:48.659 "uuid": "05b4000b-0b4e-4482-93c9-03aeb42eb58c",
00:08:48.659 "is_configured": true,
00:08:48.659 "data_offset": 0,
00:08:48.659 "data_size": 65536
00:08:48.659 },
00:08:48.659 {
00:08:48.660 "name": null,
00:08:48.660 "uuid": "330ac9a0-3e27-4337-8cef-63e391800c6f",
00:08:48.660 "is_configured": false,
00:08:48.660 "data_offset": 0,
00:08:48.660 "data_size": 65536
00:08:48.660 },
00:08:48.660 {
00:08:48.660 "name": null,
00:08:48.660 "uuid": "0c2af85e-07c9-4fb7-89c0-c3473be93665",
00:08:48.660 "is_configured": false,
00:08:48.660 "data_offset": 0,
00:08:48.660 "data_size": 65536
00:08:48.660 }
00:08:48.660 ]
00:08:48.660 }'
00:08:48.660 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:48.660 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.930 [2024-12-07 16:34:47.713832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:48.930 "name": "Existed_Raid",
00:08:48.930 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:48.930 "strip_size_kb": 64,
00:08:48.930 "state": "configuring",
00:08:48.930 "raid_level": "concat",
00:08:48.930 "superblock": false,
00:08:48.930 "num_base_bdevs": 3,
00:08:48.930 "num_base_bdevs_discovered": 2,
00:08:48.930 "num_base_bdevs_operational": 3,
00:08:48.930 "base_bdevs_list": [
00:08:48.930 {
00:08:48.930 "name": "BaseBdev1",
00:08:48.930 "uuid": "05b4000b-0b4e-4482-93c9-03aeb42eb58c",
00:08:48.930 "is_configured": true,
00:08:48.930 "data_offset": 0,
00:08:48.930 "data_size": 65536
00:08:48.930 },
00:08:48.930 {
00:08:48.930 "name": null,
00:08:48.930 "uuid": "330ac9a0-3e27-4337-8cef-63e391800c6f",
00:08:48.930 "is_configured": false,
00:08:48.930 "data_offset": 0,
00:08:48.930 "data_size": 65536
00:08:48.930 },
00:08:48.930 {
00:08:48.930 "name": "BaseBdev3",
00:08:48.930 "uuid": "0c2af85e-07c9-4fb7-89c0-c3473be93665",
00:08:48.930 "is_configured": true,
00:08:48.930 "data_offset": 0,
00:08:48.930 "data_size": 65536
00:08:48.930 }
00:08:48.930 ]
00:08:48.930 }'
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:48.930 16:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.507 [2024-12-07 16:34:48.228946] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:49.507 "name": "Existed_Raid",
00:08:49.507 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:49.507 "strip_size_kb": 64,
00:08:49.507 "state": "configuring",
00:08:49.507 "raid_level": "concat",
00:08:49.507 "superblock": false,
00:08:49.507 "num_base_bdevs": 3,
00:08:49.507 "num_base_bdevs_discovered": 1,
00:08:49.507 "num_base_bdevs_operational": 3,
00:08:49.507 "base_bdevs_list": [
00:08:49.507 {
00:08:49.507 "name": null,
00:08:49.507 "uuid": "05b4000b-0b4e-4482-93c9-03aeb42eb58c",
00:08:49.507 "is_configured": false,
00:08:49.507 "data_offset": 0,
00:08:49.507 "data_size": 65536
00:08:49.507 },
00:08:49.507 {
00:08:49.507 "name": null,
00:08:49.507 "uuid": "330ac9a0-3e27-4337-8cef-63e391800c6f",
00:08:49.507 "is_configured": false,
00:08:49.507 "data_offset": 0,
00:08:49.507 "data_size": 65536
00:08:49.507 },
00:08:49.507 {
00:08:49.507 "name": "BaseBdev3",
00:08:49.507 "uuid": "0c2af85e-07c9-4fb7-89c0-c3473be93665",
00:08:49.507 "is_configured": true,
00:08:49.507 "data_offset": 0,
00:08:49.507 "data_size": 65536
00:08:49.507 }
00:08:49.507 ]
00:08:49.507 }'
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:49.507 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:50.077 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:50.078 [2024-12-07 16:34:48.735864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:50.078 "name": "Existed_Raid",
00:08:50.078 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:50.078 "strip_size_kb": 64,
00:08:50.078 "state": "configuring",
00:08:50.078 "raid_level": "concat",
00:08:50.078 "superblock": false,
00:08:50.078 "num_base_bdevs": 3,
00:08:50.078 "num_base_bdevs_discovered": 2,
00:08:50.078 "num_base_bdevs_operational": 3,
00:08:50.078 "base_bdevs_list": [
00:08:50.078 {
00:08:50.078 "name": null,
00:08:50.078 "uuid": "05b4000b-0b4e-4482-93c9-03aeb42eb58c",
00:08:50.078 "is_configured": false,
00:08:50.078 "data_offset": 0,
00:08:50.078 "data_size": 65536
00:08:50.078 },
00:08:50.078 {
00:08:50.078 "name": "BaseBdev2",
00:08:50.078 "uuid": "330ac9a0-3e27-4337-8cef-63e391800c6f",
00:08:50.078 "is_configured": true,
00:08:50.078 "data_offset": 
0, 00:08:50.078 "data_size": 65536 00:08:50.078 }, 00:08:50.078 { 00:08:50.078 "name": "BaseBdev3", 00:08:50.078 "uuid": "0c2af85e-07c9-4fb7-89c0-c3473be93665", 00:08:50.078 "is_configured": true, 00:08:50.078 "data_offset": 0, 00:08:50.078 "data_size": 65536 00:08:50.078 } 00:08:50.078 ] 00:08:50.078 }' 00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.078 16:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.338 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.338 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.338 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.338 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:50.338 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.338 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:50.338 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.338 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.338 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.338 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:50.338 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 05b4000b-0b4e-4482-93c9-03aeb42eb58c 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.600 [2024-12-07 16:34:49.283856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:50.600 [2024-12-07 16:34:49.283961] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:50.600 [2024-12-07 16:34:49.283990] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:50.600 [2024-12-07 16:34:49.284307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:50.600 [2024-12-07 16:34:49.284516] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:50.600 [2024-12-07 16:34:49.284560] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:50.600 [2024-12-07 16:34:49.284811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.600 NewBaseBdev 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:50.600 
16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.600 [ 00:08:50.600 { 00:08:50.600 "name": "NewBaseBdev", 00:08:50.600 "aliases": [ 00:08:50.600 "05b4000b-0b4e-4482-93c9-03aeb42eb58c" 00:08:50.600 ], 00:08:50.600 "product_name": "Malloc disk", 00:08:50.600 "block_size": 512, 00:08:50.600 "num_blocks": 65536, 00:08:50.600 "uuid": "05b4000b-0b4e-4482-93c9-03aeb42eb58c", 00:08:50.600 "assigned_rate_limits": { 00:08:50.600 "rw_ios_per_sec": 0, 00:08:50.600 "rw_mbytes_per_sec": 0, 00:08:50.600 "r_mbytes_per_sec": 0, 00:08:50.600 "w_mbytes_per_sec": 0 00:08:50.600 }, 00:08:50.600 "claimed": true, 00:08:50.600 "claim_type": "exclusive_write", 00:08:50.600 "zoned": false, 00:08:50.600 "supported_io_types": { 00:08:50.600 "read": true, 00:08:50.600 "write": true, 00:08:50.600 "unmap": true, 00:08:50.600 "flush": true, 00:08:50.600 "reset": true, 00:08:50.600 "nvme_admin": false, 00:08:50.600 "nvme_io": false, 00:08:50.600 "nvme_io_md": false, 00:08:50.600 "write_zeroes": true, 00:08:50.600 "zcopy": true, 00:08:50.600 "get_zone_info": false, 00:08:50.600 "zone_management": false, 00:08:50.600 "zone_append": false, 00:08:50.600 "compare": false, 00:08:50.600 "compare_and_write": false, 00:08:50.600 "abort": true, 00:08:50.600 "seek_hole": false, 00:08:50.600 "seek_data": false, 00:08:50.600 "copy": true, 00:08:50.600 "nvme_iov_md": false 00:08:50.600 }, 00:08:50.600 
"memory_domains": [ 00:08:50.600 { 00:08:50.600 "dma_device_id": "system", 00:08:50.600 "dma_device_type": 1 00:08:50.600 }, 00:08:50.600 { 00:08:50.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.600 "dma_device_type": 2 00:08:50.600 } 00:08:50.600 ], 00:08:50.600 "driver_specific": {} 00:08:50.600 } 00:08:50.600 ] 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.600 "name": "Existed_Raid", 00:08:50.600 "uuid": "a40329cb-e2d5-45e5-b0db-01333955a6a6", 00:08:50.600 "strip_size_kb": 64, 00:08:50.600 "state": "online", 00:08:50.600 "raid_level": "concat", 00:08:50.600 "superblock": false, 00:08:50.600 "num_base_bdevs": 3, 00:08:50.600 "num_base_bdevs_discovered": 3, 00:08:50.600 "num_base_bdevs_operational": 3, 00:08:50.600 "base_bdevs_list": [ 00:08:50.600 { 00:08:50.600 "name": "NewBaseBdev", 00:08:50.600 "uuid": "05b4000b-0b4e-4482-93c9-03aeb42eb58c", 00:08:50.600 "is_configured": true, 00:08:50.600 "data_offset": 0, 00:08:50.600 "data_size": 65536 00:08:50.600 }, 00:08:50.600 { 00:08:50.600 "name": "BaseBdev2", 00:08:50.600 "uuid": "330ac9a0-3e27-4337-8cef-63e391800c6f", 00:08:50.600 "is_configured": true, 00:08:50.600 "data_offset": 0, 00:08:50.600 "data_size": 65536 00:08:50.600 }, 00:08:50.600 { 00:08:50.600 "name": "BaseBdev3", 00:08:50.600 "uuid": "0c2af85e-07c9-4fb7-89c0-c3473be93665", 00:08:50.600 "is_configured": true, 00:08:50.600 "data_offset": 0, 00:08:50.600 "data_size": 65536 00:08:50.600 } 00:08:50.600 ] 00:08:50.600 }' 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.600 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.169 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:51.169 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:51.169 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:51.169 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.169 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.169 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.169 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.169 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:51.169 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.169 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.169 [2024-12-07 16:34:49.811397] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.169 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.169 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.169 "name": "Existed_Raid", 00:08:51.169 "aliases": [ 00:08:51.169 "a40329cb-e2d5-45e5-b0db-01333955a6a6" 00:08:51.169 ], 00:08:51.169 "product_name": "Raid Volume", 00:08:51.169 "block_size": 512, 00:08:51.169 "num_blocks": 196608, 00:08:51.169 "uuid": "a40329cb-e2d5-45e5-b0db-01333955a6a6", 00:08:51.169 "assigned_rate_limits": { 00:08:51.169 "rw_ios_per_sec": 0, 00:08:51.169 "rw_mbytes_per_sec": 0, 00:08:51.169 "r_mbytes_per_sec": 0, 00:08:51.170 "w_mbytes_per_sec": 0 00:08:51.170 }, 00:08:51.170 "claimed": false, 00:08:51.170 "zoned": false, 00:08:51.170 "supported_io_types": { 00:08:51.170 "read": true, 00:08:51.170 "write": true, 00:08:51.170 "unmap": true, 00:08:51.170 "flush": true, 00:08:51.170 "reset": true, 00:08:51.170 "nvme_admin": false, 00:08:51.170 "nvme_io": false, 00:08:51.170 "nvme_io_md": false, 00:08:51.170 "write_zeroes": true, 
00:08:51.170 "zcopy": false, 00:08:51.170 "get_zone_info": false, 00:08:51.170 "zone_management": false, 00:08:51.170 "zone_append": false, 00:08:51.170 "compare": false, 00:08:51.170 "compare_and_write": false, 00:08:51.170 "abort": false, 00:08:51.170 "seek_hole": false, 00:08:51.170 "seek_data": false, 00:08:51.170 "copy": false, 00:08:51.170 "nvme_iov_md": false 00:08:51.170 }, 00:08:51.170 "memory_domains": [ 00:08:51.170 { 00:08:51.170 "dma_device_id": "system", 00:08:51.170 "dma_device_type": 1 00:08:51.170 }, 00:08:51.170 { 00:08:51.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.170 "dma_device_type": 2 00:08:51.170 }, 00:08:51.170 { 00:08:51.170 "dma_device_id": "system", 00:08:51.170 "dma_device_type": 1 00:08:51.170 }, 00:08:51.170 { 00:08:51.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.170 "dma_device_type": 2 00:08:51.170 }, 00:08:51.170 { 00:08:51.170 "dma_device_id": "system", 00:08:51.170 "dma_device_type": 1 00:08:51.170 }, 00:08:51.170 { 00:08:51.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.170 "dma_device_type": 2 00:08:51.170 } 00:08:51.170 ], 00:08:51.170 "driver_specific": { 00:08:51.170 "raid": { 00:08:51.170 "uuid": "a40329cb-e2d5-45e5-b0db-01333955a6a6", 00:08:51.170 "strip_size_kb": 64, 00:08:51.170 "state": "online", 00:08:51.170 "raid_level": "concat", 00:08:51.170 "superblock": false, 00:08:51.170 "num_base_bdevs": 3, 00:08:51.170 "num_base_bdevs_discovered": 3, 00:08:51.170 "num_base_bdevs_operational": 3, 00:08:51.170 "base_bdevs_list": [ 00:08:51.170 { 00:08:51.170 "name": "NewBaseBdev", 00:08:51.170 "uuid": "05b4000b-0b4e-4482-93c9-03aeb42eb58c", 00:08:51.170 "is_configured": true, 00:08:51.170 "data_offset": 0, 00:08:51.170 "data_size": 65536 00:08:51.170 }, 00:08:51.170 { 00:08:51.170 "name": "BaseBdev2", 00:08:51.170 "uuid": "330ac9a0-3e27-4337-8cef-63e391800c6f", 00:08:51.170 "is_configured": true, 00:08:51.170 "data_offset": 0, 00:08:51.170 "data_size": 65536 00:08:51.170 }, 00:08:51.170 { 
00:08:51.170 "name": "BaseBdev3", 00:08:51.170 "uuid": "0c2af85e-07c9-4fb7-89c0-c3473be93665", 00:08:51.170 "is_configured": true, 00:08:51.170 "data_offset": 0, 00:08:51.170 "data_size": 65536 00:08:51.170 } 00:08:51.170 ] 00:08:51.170 } 00:08:51.170 } 00:08:51.170 }' 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:51.170 BaseBdev2 00:08:51.170 BaseBdev3' 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.170 16:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.170 16:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.170 16:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.170 16:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.170 16:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:51.170 16:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.170 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.170 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.170 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:51.430 [2024-12-07 16:34:50.070592] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:51.430 [2024-12-07 16:34:50.070659] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.430 [2024-12-07 16:34:50.070765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.430 [2024-12-07 16:34:50.070871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.430 [2024-12-07 16:34:50.070950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77008 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 77008 ']' 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 77008 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77008 00:08:51.430 killing process with pid 77008 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77008' 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 77008 00:08:51.430 [2024-12-07 16:34:50.121433] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.430 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 77008 00:08:51.430 [2024-12-07 16:34:50.181732] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.691 16:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:51.691 00:08:51.691 real 0m9.369s 00:08:51.691 user 0m15.681s 00:08:51.691 sys 0m2.003s 00:08:51.691 ************************************ 00:08:51.691 END TEST raid_state_function_test 00:08:51.691 ************************************ 00:08:51.691 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.691 16:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.951 16:34:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:51.951 16:34:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:51.951 16:34:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.951 16:34:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.951 ************************************ 00:08:51.951 START TEST raid_state_function_test_sb 00:08:51.951 ************************************ 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77618 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77618' 00:08:51.951 Process raid pid: 77618 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77618 00:08:51.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77618 ']' 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.951 16:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.951 [2024-12-07 16:34:50.726959] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:51.951 [2024-12-07 16:34:50.727085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.211 [2024-12-07 16:34:50.892246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.211 [2024-12-07 16:34:50.966396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.211 [2024-12-07 16:34:51.043040] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.211 [2024-12-07 16:34:51.043086] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.781 [2024-12-07 16:34:51.563149] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:52.781 [2024-12-07 16:34:51.563258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:52.781 [2024-12-07 
16:34:51.563295] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.781 [2024-12-07 16:34:51.563321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:52.781 [2024-12-07 16:34:51.563350] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:52.781 [2024-12-07 16:34:51.563379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.781 "name": "Existed_Raid", 00:08:52.781 "uuid": "c11a5623-82cb-4050-ae06-0acbe7c80f71", 00:08:52.781 "strip_size_kb": 64, 00:08:52.781 "state": "configuring", 00:08:52.781 "raid_level": "concat", 00:08:52.781 "superblock": true, 00:08:52.781 "num_base_bdevs": 3, 00:08:52.781 "num_base_bdevs_discovered": 0, 00:08:52.781 "num_base_bdevs_operational": 3, 00:08:52.781 "base_bdevs_list": [ 00:08:52.781 { 00:08:52.781 "name": "BaseBdev1", 00:08:52.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.781 "is_configured": false, 00:08:52.781 "data_offset": 0, 00:08:52.781 "data_size": 0 00:08:52.781 }, 00:08:52.781 { 00:08:52.781 "name": "BaseBdev2", 00:08:52.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.781 "is_configured": false, 00:08:52.781 "data_offset": 0, 00:08:52.781 "data_size": 0 00:08:52.781 }, 00:08:52.781 { 00:08:52.781 "name": "BaseBdev3", 00:08:52.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.781 "is_configured": false, 00:08:52.781 "data_offset": 0, 00:08:52.781 "data_size": 0 00:08:52.781 } 00:08:52.781 ] 00:08:52.781 }' 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.781 16:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.351 [2024-12-07 16:34:52.034252] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.351 [2024-12-07 16:34:52.034356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.351 [2024-12-07 16:34:52.042279] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.351 [2024-12-07 16:34:52.042366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.351 [2024-12-07 16:34:52.042395] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.351 [2024-12-07 16:34:52.042420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.351 [2024-12-07 16:34:52.042439] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.351 [2024-12-07 16:34:52.042461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:53.351 
16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.351 [2024-12-07 16:34:52.065605] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.351 BaseBdev1 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.351 [ 00:08:53.351 { 
00:08:53.351 "name": "BaseBdev1", 00:08:53.351 "aliases": [ 00:08:53.351 "3d48cfba-21e2-4b8a-a2b2-c088e933a49e" 00:08:53.351 ], 00:08:53.351 "product_name": "Malloc disk", 00:08:53.351 "block_size": 512, 00:08:53.351 "num_blocks": 65536, 00:08:53.351 "uuid": "3d48cfba-21e2-4b8a-a2b2-c088e933a49e", 00:08:53.351 "assigned_rate_limits": { 00:08:53.351 "rw_ios_per_sec": 0, 00:08:53.351 "rw_mbytes_per_sec": 0, 00:08:53.351 "r_mbytes_per_sec": 0, 00:08:53.351 "w_mbytes_per_sec": 0 00:08:53.351 }, 00:08:53.351 "claimed": true, 00:08:53.351 "claim_type": "exclusive_write", 00:08:53.351 "zoned": false, 00:08:53.351 "supported_io_types": { 00:08:53.351 "read": true, 00:08:53.351 "write": true, 00:08:53.351 "unmap": true, 00:08:53.351 "flush": true, 00:08:53.351 "reset": true, 00:08:53.351 "nvme_admin": false, 00:08:53.351 "nvme_io": false, 00:08:53.351 "nvme_io_md": false, 00:08:53.351 "write_zeroes": true, 00:08:53.351 "zcopy": true, 00:08:53.351 "get_zone_info": false, 00:08:53.351 "zone_management": false, 00:08:53.351 "zone_append": false, 00:08:53.351 "compare": false, 00:08:53.351 "compare_and_write": false, 00:08:53.351 "abort": true, 00:08:53.351 "seek_hole": false, 00:08:53.351 "seek_data": false, 00:08:53.351 "copy": true, 00:08:53.351 "nvme_iov_md": false 00:08:53.351 }, 00:08:53.351 "memory_domains": [ 00:08:53.351 { 00:08:53.351 "dma_device_id": "system", 00:08:53.351 "dma_device_type": 1 00:08:53.351 }, 00:08:53.351 { 00:08:53.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.351 "dma_device_type": 2 00:08:53.351 } 00:08:53.351 ], 00:08:53.351 "driver_specific": {} 00:08:53.351 } 00:08:53.351 ] 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.351 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.352 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.352 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.352 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.352 "name": "Existed_Raid", 00:08:53.352 "uuid": "57f84054-2366-47a9-b8b1-24e9a529d99c", 00:08:53.352 "strip_size_kb": 64, 00:08:53.352 "state": "configuring", 00:08:53.352 "raid_level": "concat", 00:08:53.352 "superblock": true, 00:08:53.352 
"num_base_bdevs": 3, 00:08:53.352 "num_base_bdevs_discovered": 1, 00:08:53.352 "num_base_bdevs_operational": 3, 00:08:53.352 "base_bdevs_list": [ 00:08:53.352 { 00:08:53.352 "name": "BaseBdev1", 00:08:53.352 "uuid": "3d48cfba-21e2-4b8a-a2b2-c088e933a49e", 00:08:53.352 "is_configured": true, 00:08:53.352 "data_offset": 2048, 00:08:53.352 "data_size": 63488 00:08:53.352 }, 00:08:53.352 { 00:08:53.352 "name": "BaseBdev2", 00:08:53.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.352 "is_configured": false, 00:08:53.352 "data_offset": 0, 00:08:53.352 "data_size": 0 00:08:53.352 }, 00:08:53.352 { 00:08:53.352 "name": "BaseBdev3", 00:08:53.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.352 "is_configured": false, 00:08:53.352 "data_offset": 0, 00:08:53.352 "data_size": 0 00:08:53.352 } 00:08:53.352 ] 00:08:53.352 }' 00:08:53.352 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.352 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.921 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.921 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 [2024-12-07 16:34:52.520914] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.921 [2024-12-07 16:34:52.521011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:53.921 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.921 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.921 
16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.921 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 [2024-12-07 16:34:52.532901] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.921 [2024-12-07 16:34:52.535138] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.921 [2024-12-07 16:34:52.535215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.921 [2024-12-07 16:34:52.535229] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.921 [2024-12-07 16:34:52.535240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.921 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.921 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:53.921 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.921 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.921 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.921 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.922 "name": "Existed_Raid", 00:08:53.922 "uuid": "ecfe03b7-d256-4dc3-a5b2-86bd1ed2ebc5", 00:08:53.922 "strip_size_kb": 64, 00:08:53.922 "state": "configuring", 00:08:53.922 "raid_level": "concat", 00:08:53.922 "superblock": true, 00:08:53.922 "num_base_bdevs": 3, 00:08:53.922 "num_base_bdevs_discovered": 1, 00:08:53.922 "num_base_bdevs_operational": 3, 00:08:53.922 "base_bdevs_list": [ 00:08:53.922 { 00:08:53.922 "name": "BaseBdev1", 00:08:53.922 "uuid": "3d48cfba-21e2-4b8a-a2b2-c088e933a49e", 00:08:53.922 "is_configured": true, 00:08:53.922 "data_offset": 2048, 00:08:53.922 "data_size": 63488 00:08:53.922 }, 00:08:53.922 { 00:08:53.922 "name": "BaseBdev2", 00:08:53.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.922 "is_configured": false, 00:08:53.922 "data_offset": 0, 00:08:53.922 "data_size": 0 00:08:53.922 }, 00:08:53.922 { 00:08:53.922 "name": "BaseBdev3", 00:08:53.922 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:53.922 "is_configured": false, 00:08:53.922 "data_offset": 0, 00:08:53.922 "data_size": 0 00:08:53.922 } 00:08:53.922 ] 00:08:53.922 }' 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.922 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.182 16:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:54.182 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.182 16:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.182 [2024-12-07 16:34:53.003386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.182 BaseBdev2 00:08:54.182 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.182 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:54.182 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:54.182 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:54.182 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:54.182 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:54.182 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:54.182 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:54.182 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.182 16:34:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.182 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.182 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:54.182 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.182 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.182 [ 00:08:54.182 { 00:08:54.182 "name": "BaseBdev2", 00:08:54.182 "aliases": [ 00:08:54.182 "7e03ed69-9185-4404-b243-5391973520df" 00:08:54.182 ], 00:08:54.182 "product_name": "Malloc disk", 00:08:54.182 "block_size": 512, 00:08:54.182 "num_blocks": 65536, 00:08:54.182 "uuid": "7e03ed69-9185-4404-b243-5391973520df", 00:08:54.182 "assigned_rate_limits": { 00:08:54.182 "rw_ios_per_sec": 0, 00:08:54.182 "rw_mbytes_per_sec": 0, 00:08:54.182 "r_mbytes_per_sec": 0, 00:08:54.182 "w_mbytes_per_sec": 0 00:08:54.182 }, 00:08:54.182 "claimed": true, 00:08:54.182 "claim_type": "exclusive_write", 00:08:54.182 "zoned": false, 00:08:54.182 "supported_io_types": { 00:08:54.182 "read": true, 00:08:54.182 "write": true, 00:08:54.182 "unmap": true, 00:08:54.182 "flush": true, 00:08:54.182 "reset": true, 00:08:54.182 "nvme_admin": false, 00:08:54.182 "nvme_io": false, 00:08:54.182 "nvme_io_md": false, 00:08:54.183 "write_zeroes": true, 00:08:54.183 "zcopy": true, 00:08:54.183 "get_zone_info": false, 00:08:54.183 "zone_management": false, 00:08:54.183 "zone_append": false, 00:08:54.183 "compare": false, 00:08:54.183 "compare_and_write": false, 00:08:54.183 "abort": true, 00:08:54.183 "seek_hole": false, 00:08:54.183 "seek_data": false, 00:08:54.183 "copy": true, 00:08:54.183 "nvme_iov_md": false 00:08:54.183 }, 00:08:54.183 "memory_domains": [ 00:08:54.183 { 00:08:54.183 "dma_device_id": "system", 00:08:54.183 "dma_device_type": 1 00:08:54.183 }, 00:08:54.183 { 00:08:54.183 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.183 "dma_device_type": 2 00:08:54.183 } 00:08:54.183 ], 00:08:54.183 "driver_specific": {} 00:08:54.183 } 00:08:54.183 ] 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.183 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.442 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.442 "name": "Existed_Raid", 00:08:54.442 "uuid": "ecfe03b7-d256-4dc3-a5b2-86bd1ed2ebc5", 00:08:54.442 "strip_size_kb": 64, 00:08:54.442 "state": "configuring", 00:08:54.442 "raid_level": "concat", 00:08:54.442 "superblock": true, 00:08:54.442 "num_base_bdevs": 3, 00:08:54.442 "num_base_bdevs_discovered": 2, 00:08:54.442 "num_base_bdevs_operational": 3, 00:08:54.442 "base_bdevs_list": [ 00:08:54.442 { 00:08:54.442 "name": "BaseBdev1", 00:08:54.442 "uuid": "3d48cfba-21e2-4b8a-a2b2-c088e933a49e", 00:08:54.442 "is_configured": true, 00:08:54.442 "data_offset": 2048, 00:08:54.442 "data_size": 63488 00:08:54.442 }, 00:08:54.442 { 00:08:54.442 "name": "BaseBdev2", 00:08:54.442 "uuid": "7e03ed69-9185-4404-b243-5391973520df", 00:08:54.442 "is_configured": true, 00:08:54.442 "data_offset": 2048, 00:08:54.442 "data_size": 63488 00:08:54.442 }, 00:08:54.442 { 00:08:54.442 "name": "BaseBdev3", 00:08:54.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.442 "is_configured": false, 00:08:54.442 "data_offset": 0, 00:08:54.442 "data_size": 0 00:08:54.442 } 00:08:54.442 ] 00:08:54.442 }' 00:08:54.442 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.442 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:54.702 16:34:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.702 [2024-12-07 16:34:53.479470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:54.702 [2024-12-07 16:34:53.479783] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:54.702 [2024-12-07 16:34:53.479851] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:54.702 BaseBdev3 00:08:54.702 [2024-12-07 16:34:53.480232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:54.702 [2024-12-07 16:34:53.480385] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:54.702 [2024-12-07 16:34:53.480435] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:54.702 [2024-12-07 16:34:53.480583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.702 [ 00:08:54.702 { 00:08:54.702 "name": "BaseBdev3", 00:08:54.702 "aliases": [ 00:08:54.702 "45e4d3e6-77ba-4897-82ab-23395d354046" 00:08:54.702 ], 00:08:54.702 "product_name": "Malloc disk", 00:08:54.702 "block_size": 512, 00:08:54.702 "num_blocks": 65536, 00:08:54.702 "uuid": "45e4d3e6-77ba-4897-82ab-23395d354046", 00:08:54.702 "assigned_rate_limits": { 00:08:54.702 "rw_ios_per_sec": 0, 00:08:54.702 "rw_mbytes_per_sec": 0, 00:08:54.702 "r_mbytes_per_sec": 0, 00:08:54.702 "w_mbytes_per_sec": 0 00:08:54.702 }, 00:08:54.702 "claimed": true, 00:08:54.702 "claim_type": "exclusive_write", 00:08:54.702 "zoned": false, 00:08:54.702 "supported_io_types": { 00:08:54.702 "read": true, 00:08:54.702 "write": true, 00:08:54.702 "unmap": true, 00:08:54.702 "flush": true, 00:08:54.702 "reset": true, 00:08:54.702 "nvme_admin": false, 00:08:54.702 "nvme_io": false, 00:08:54.702 "nvme_io_md": false, 00:08:54.702 "write_zeroes": true, 00:08:54.702 "zcopy": true, 00:08:54.702 "get_zone_info": false, 00:08:54.702 "zone_management": false, 00:08:54.702 "zone_append": false, 00:08:54.702 "compare": false, 00:08:54.702 "compare_and_write": false, 00:08:54.702 "abort": true, 00:08:54.702 "seek_hole": false, 00:08:54.702 "seek_data": false, 
00:08:54.702 "copy": true, 00:08:54.702 "nvme_iov_md": false 00:08:54.702 }, 00:08:54.702 "memory_domains": [ 00:08:54.702 { 00:08:54.702 "dma_device_id": "system", 00:08:54.702 "dma_device_type": 1 00:08:54.702 }, 00:08:54.702 { 00:08:54.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.702 "dma_device_type": 2 00:08:54.702 } 00:08:54.702 ], 00:08:54.702 "driver_specific": {} 00:08:54.702 } 00:08:54.702 ] 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.702 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.703 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.703 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.703 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.703 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.703 16:34:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.703 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.703 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.703 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.703 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.703 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.703 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.703 "name": "Existed_Raid", 00:08:54.703 "uuid": "ecfe03b7-d256-4dc3-a5b2-86bd1ed2ebc5", 00:08:54.703 "strip_size_kb": 64, 00:08:54.703 "state": "online", 00:08:54.703 "raid_level": "concat", 00:08:54.703 "superblock": true, 00:08:54.703 "num_base_bdevs": 3, 00:08:54.703 "num_base_bdevs_discovered": 3, 00:08:54.703 "num_base_bdevs_operational": 3, 00:08:54.703 "base_bdevs_list": [ 00:08:54.703 { 00:08:54.703 "name": "BaseBdev1", 00:08:54.703 "uuid": "3d48cfba-21e2-4b8a-a2b2-c088e933a49e", 00:08:54.703 "is_configured": true, 00:08:54.703 "data_offset": 2048, 00:08:54.703 "data_size": 63488 00:08:54.703 }, 00:08:54.703 { 00:08:54.703 "name": "BaseBdev2", 00:08:54.703 "uuid": "7e03ed69-9185-4404-b243-5391973520df", 00:08:54.703 "is_configured": true, 00:08:54.703 "data_offset": 2048, 00:08:54.703 "data_size": 63488 00:08:54.703 }, 00:08:54.703 { 00:08:54.703 "name": "BaseBdev3", 00:08:54.703 "uuid": "45e4d3e6-77ba-4897-82ab-23395d354046", 00:08:54.703 "is_configured": true, 00:08:54.703 "data_offset": 2048, 00:08:54.703 "data_size": 63488 00:08:54.703 } 00:08:54.703 ] 00:08:54.703 }' 00:08:54.703 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.703 16:34:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.270 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:55.270 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:55.270 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.270 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.270 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.270 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.271 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:55.271 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.271 16:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.271 16:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.271 [2024-12-07 16:34:53.999045] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.271 "name": "Existed_Raid", 00:08:55.271 "aliases": [ 00:08:55.271 "ecfe03b7-d256-4dc3-a5b2-86bd1ed2ebc5" 00:08:55.271 ], 00:08:55.271 "product_name": "Raid Volume", 00:08:55.271 "block_size": 512, 00:08:55.271 "num_blocks": 190464, 00:08:55.271 "uuid": "ecfe03b7-d256-4dc3-a5b2-86bd1ed2ebc5", 00:08:55.271 "assigned_rate_limits": { 00:08:55.271 "rw_ios_per_sec": 0, 00:08:55.271 "rw_mbytes_per_sec": 0, 00:08:55.271 
"r_mbytes_per_sec": 0, 00:08:55.271 "w_mbytes_per_sec": 0 00:08:55.271 }, 00:08:55.271 "claimed": false, 00:08:55.271 "zoned": false, 00:08:55.271 "supported_io_types": { 00:08:55.271 "read": true, 00:08:55.271 "write": true, 00:08:55.271 "unmap": true, 00:08:55.271 "flush": true, 00:08:55.271 "reset": true, 00:08:55.271 "nvme_admin": false, 00:08:55.271 "nvme_io": false, 00:08:55.271 "nvme_io_md": false, 00:08:55.271 "write_zeroes": true, 00:08:55.271 "zcopy": false, 00:08:55.271 "get_zone_info": false, 00:08:55.271 "zone_management": false, 00:08:55.271 "zone_append": false, 00:08:55.271 "compare": false, 00:08:55.271 "compare_and_write": false, 00:08:55.271 "abort": false, 00:08:55.271 "seek_hole": false, 00:08:55.271 "seek_data": false, 00:08:55.271 "copy": false, 00:08:55.271 "nvme_iov_md": false 00:08:55.271 }, 00:08:55.271 "memory_domains": [ 00:08:55.271 { 00:08:55.271 "dma_device_id": "system", 00:08:55.271 "dma_device_type": 1 00:08:55.271 }, 00:08:55.271 { 00:08:55.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.271 "dma_device_type": 2 00:08:55.271 }, 00:08:55.271 { 00:08:55.271 "dma_device_id": "system", 00:08:55.271 "dma_device_type": 1 00:08:55.271 }, 00:08:55.271 { 00:08:55.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.271 "dma_device_type": 2 00:08:55.271 }, 00:08:55.271 { 00:08:55.271 "dma_device_id": "system", 00:08:55.271 "dma_device_type": 1 00:08:55.271 }, 00:08:55.271 { 00:08:55.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.271 "dma_device_type": 2 00:08:55.271 } 00:08:55.271 ], 00:08:55.271 "driver_specific": { 00:08:55.271 "raid": { 00:08:55.271 "uuid": "ecfe03b7-d256-4dc3-a5b2-86bd1ed2ebc5", 00:08:55.271 "strip_size_kb": 64, 00:08:55.271 "state": "online", 00:08:55.271 "raid_level": "concat", 00:08:55.271 "superblock": true, 00:08:55.271 "num_base_bdevs": 3, 00:08:55.271 "num_base_bdevs_discovered": 3, 00:08:55.271 "num_base_bdevs_operational": 3, 00:08:55.271 "base_bdevs_list": [ 00:08:55.271 { 00:08:55.271 
"name": "BaseBdev1", 00:08:55.271 "uuid": "3d48cfba-21e2-4b8a-a2b2-c088e933a49e", 00:08:55.271 "is_configured": true, 00:08:55.271 "data_offset": 2048, 00:08:55.271 "data_size": 63488 00:08:55.271 }, 00:08:55.271 { 00:08:55.271 "name": "BaseBdev2", 00:08:55.271 "uuid": "7e03ed69-9185-4404-b243-5391973520df", 00:08:55.271 "is_configured": true, 00:08:55.271 "data_offset": 2048, 00:08:55.271 "data_size": 63488 00:08:55.271 }, 00:08:55.271 { 00:08:55.271 "name": "BaseBdev3", 00:08:55.271 "uuid": "45e4d3e6-77ba-4897-82ab-23395d354046", 00:08:55.271 "is_configured": true, 00:08:55.271 "data_offset": 2048, 00:08:55.271 "data_size": 63488 00:08:55.271 } 00:08:55.271 ] 00:08:55.271 } 00:08:55.271 } 00:08:55.271 }' 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:55.271 BaseBdev2 00:08:55.271 BaseBdev3' 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.271 16:34:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.271 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.568 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.568 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.568 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.568 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.569 [2024-12-07 16:34:54.254340] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:55.569 [2024-12-07 16:34:54.254422] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.569 [2024-12-07 16:34:54.254508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.569 "name": "Existed_Raid", 00:08:55.569 "uuid": "ecfe03b7-d256-4dc3-a5b2-86bd1ed2ebc5", 00:08:55.569 "strip_size_kb": 64, 00:08:55.569 "state": "offline", 00:08:55.569 "raid_level": "concat", 00:08:55.569 "superblock": true, 00:08:55.569 "num_base_bdevs": 3, 00:08:55.569 "num_base_bdevs_discovered": 2, 00:08:55.569 "num_base_bdevs_operational": 2, 00:08:55.569 "base_bdevs_list": [ 00:08:55.569 { 00:08:55.569 "name": null, 00:08:55.569 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:55.569 "is_configured": false, 00:08:55.569 "data_offset": 0, 00:08:55.569 "data_size": 63488 00:08:55.569 }, 00:08:55.569 { 00:08:55.569 "name": "BaseBdev2", 00:08:55.569 "uuid": "7e03ed69-9185-4404-b243-5391973520df", 00:08:55.569 "is_configured": true, 00:08:55.569 "data_offset": 2048, 00:08:55.569 "data_size": 63488 00:08:55.569 }, 00:08:55.569 { 00:08:55.569 "name": "BaseBdev3", 00:08:55.569 "uuid": "45e4d3e6-77ba-4897-82ab-23395d354046", 00:08:55.569 "is_configured": true, 00:08:55.569 "data_offset": 2048, 00:08:55.569 "data_size": 63488 00:08:55.569 } 00:08:55.569 ] 00:08:55.569 }' 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.569 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.138 [2024-12-07 16:34:54.794159] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.138 [2024-12-07 16:34:54.862540] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:56.138 [2024-12-07 16:34:54.862633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.138 BaseBdev2 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.138 
16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:56.138 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.139 [ 00:08:56.139 { 00:08:56.139 "name": "BaseBdev2", 00:08:56.139 "aliases": [ 00:08:56.139 "39fa3282-808f-4775-9ce9-d27eb7544fb2" 00:08:56.139 ], 00:08:56.139 "product_name": "Malloc disk", 00:08:56.139 "block_size": 512, 00:08:56.139 "num_blocks": 65536, 00:08:56.139 "uuid": "39fa3282-808f-4775-9ce9-d27eb7544fb2", 00:08:56.139 "assigned_rate_limits": { 00:08:56.139 "rw_ios_per_sec": 0, 00:08:56.139 "rw_mbytes_per_sec": 0, 00:08:56.139 "r_mbytes_per_sec": 0, 00:08:56.139 "w_mbytes_per_sec": 0 
00:08:56.139 }, 00:08:56.139 "claimed": false, 00:08:56.139 "zoned": false, 00:08:56.139 "supported_io_types": { 00:08:56.139 "read": true, 00:08:56.139 "write": true, 00:08:56.139 "unmap": true, 00:08:56.139 "flush": true, 00:08:56.139 "reset": true, 00:08:56.139 "nvme_admin": false, 00:08:56.139 "nvme_io": false, 00:08:56.139 "nvme_io_md": false, 00:08:56.139 "write_zeroes": true, 00:08:56.139 "zcopy": true, 00:08:56.139 "get_zone_info": false, 00:08:56.139 "zone_management": false, 00:08:56.139 "zone_append": false, 00:08:56.139 "compare": false, 00:08:56.139 "compare_and_write": false, 00:08:56.139 "abort": true, 00:08:56.139 "seek_hole": false, 00:08:56.139 "seek_data": false, 00:08:56.139 "copy": true, 00:08:56.139 "nvme_iov_md": false 00:08:56.139 }, 00:08:56.139 "memory_domains": [ 00:08:56.139 { 00:08:56.139 "dma_device_id": "system", 00:08:56.139 "dma_device_type": 1 00:08:56.139 }, 00:08:56.139 { 00:08:56.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.139 "dma_device_type": 2 00:08:56.139 } 00:08:56.139 ], 00:08:56.139 "driver_specific": {} 00:08:56.139 } 00:08:56.139 ] 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.139 16:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.139 BaseBdev3 00:08:56.139 16:34:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.139 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:56.139 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:56.139 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.139 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:56.139 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.139 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.139 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:56.139 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.139 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.139 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.139 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:56.139 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.139 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.398 [ 00:08:56.398 { 00:08:56.398 "name": "BaseBdev3", 00:08:56.398 "aliases": [ 00:08:56.398 "0550c6d8-c2f6-4cb2-ab55-f61851d921b0" 00:08:56.398 ], 00:08:56.398 "product_name": "Malloc disk", 00:08:56.398 "block_size": 512, 00:08:56.398 "num_blocks": 65536, 00:08:56.398 "uuid": "0550c6d8-c2f6-4cb2-ab55-f61851d921b0", 00:08:56.398 "assigned_rate_limits": { 00:08:56.398 "rw_ios_per_sec": 0, 00:08:56.398 "rw_mbytes_per_sec": 0, 
00:08:56.398 "r_mbytes_per_sec": 0, 00:08:56.398 "w_mbytes_per_sec": 0 00:08:56.398 }, 00:08:56.398 "claimed": false, 00:08:56.398 "zoned": false, 00:08:56.398 "supported_io_types": { 00:08:56.398 "read": true, 00:08:56.398 "write": true, 00:08:56.398 "unmap": true, 00:08:56.398 "flush": true, 00:08:56.398 "reset": true, 00:08:56.398 "nvme_admin": false, 00:08:56.398 "nvme_io": false, 00:08:56.398 "nvme_io_md": false, 00:08:56.398 "write_zeroes": true, 00:08:56.398 "zcopy": true, 00:08:56.398 "get_zone_info": false, 00:08:56.398 "zone_management": false, 00:08:56.398 "zone_append": false, 00:08:56.398 "compare": false, 00:08:56.398 "compare_and_write": false, 00:08:56.398 "abort": true, 00:08:56.398 "seek_hole": false, 00:08:56.398 "seek_data": false, 00:08:56.398 "copy": true, 00:08:56.398 "nvme_iov_md": false 00:08:56.398 }, 00:08:56.398 "memory_domains": [ 00:08:56.398 { 00:08:56.398 "dma_device_id": "system", 00:08:56.398 "dma_device_type": 1 00:08:56.398 }, 00:08:56.398 { 00:08:56.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.398 "dma_device_type": 2 00:08:56.398 } 00:08:56.398 ], 00:08:56.398 "driver_specific": {} 00:08:56.398 } 00:08:56.398 ] 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:56.398 [2024-12-07 16:34:55.054623] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.398 [2024-12-07 16:34:55.054708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.398 [2024-12-07 16:34:55.054751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.398 [2024-12-07 16:34:55.056879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.398 16:34:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.398 "name": "Existed_Raid", 00:08:56.398 "uuid": "f99a98ae-37f1-4b16-951e-5b317e500bd5", 00:08:56.398 "strip_size_kb": 64, 00:08:56.398 "state": "configuring", 00:08:56.398 "raid_level": "concat", 00:08:56.398 "superblock": true, 00:08:56.398 "num_base_bdevs": 3, 00:08:56.398 "num_base_bdevs_discovered": 2, 00:08:56.398 "num_base_bdevs_operational": 3, 00:08:56.398 "base_bdevs_list": [ 00:08:56.398 { 00:08:56.398 "name": "BaseBdev1", 00:08:56.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.398 "is_configured": false, 00:08:56.398 "data_offset": 0, 00:08:56.398 "data_size": 0 00:08:56.398 }, 00:08:56.398 { 00:08:56.398 "name": "BaseBdev2", 00:08:56.398 "uuid": "39fa3282-808f-4775-9ce9-d27eb7544fb2", 00:08:56.398 "is_configured": true, 00:08:56.398 "data_offset": 2048, 00:08:56.398 "data_size": 63488 00:08:56.398 }, 00:08:56.398 { 00:08:56.398 "name": "BaseBdev3", 00:08:56.398 "uuid": "0550c6d8-c2f6-4cb2-ab55-f61851d921b0", 00:08:56.398 "is_configured": true, 00:08:56.398 "data_offset": 2048, 00:08:56.398 "data_size": 63488 00:08:56.398 } 00:08:56.398 ] 00:08:56.398 }' 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.398 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.656 [2024-12-07 16:34:55.505849] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.656 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.914 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.914 "name": "Existed_Raid", 00:08:56.914 "uuid": "f99a98ae-37f1-4b16-951e-5b317e500bd5", 00:08:56.914 "strip_size_kb": 64, 00:08:56.914 "state": "configuring", 00:08:56.914 "raid_level": "concat", 00:08:56.914 "superblock": true, 00:08:56.914 "num_base_bdevs": 3, 00:08:56.914 "num_base_bdevs_discovered": 1, 00:08:56.914 "num_base_bdevs_operational": 3, 00:08:56.914 "base_bdevs_list": [ 00:08:56.914 { 00:08:56.914 "name": "BaseBdev1", 00:08:56.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.914 "is_configured": false, 00:08:56.914 "data_offset": 0, 00:08:56.914 "data_size": 0 00:08:56.914 }, 00:08:56.914 { 00:08:56.914 "name": null, 00:08:56.914 "uuid": "39fa3282-808f-4775-9ce9-d27eb7544fb2", 00:08:56.914 "is_configured": false, 00:08:56.914 "data_offset": 0, 00:08:56.914 "data_size": 63488 00:08:56.914 }, 00:08:56.914 { 00:08:56.914 "name": "BaseBdev3", 00:08:56.914 "uuid": "0550c6d8-c2f6-4cb2-ab55-f61851d921b0", 00:08:56.914 "is_configured": true, 00:08:56.914 "data_offset": 2048, 00:08:56.914 "data_size": 63488 00:08:56.914 } 00:08:56.914 ] 00:08:56.914 }' 00:08:56.914 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.914 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.173 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.173 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.173 16:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:08:57.173 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.173 16:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.173 [2024-12-07 16:34:56.034295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.173 BaseBdev1 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.173 16:34:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.173 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.173 [ 00:08:57.173 { 00:08:57.173 "name": "BaseBdev1", 00:08:57.173 "aliases": [ 00:08:57.173 "6bff436f-ea29-4cdb-bcff-1f413d08108b" 00:08:57.173 ], 00:08:57.173 "product_name": "Malloc disk", 00:08:57.173 "block_size": 512, 00:08:57.173 "num_blocks": 65536, 00:08:57.173 "uuid": "6bff436f-ea29-4cdb-bcff-1f413d08108b", 00:08:57.173 "assigned_rate_limits": { 00:08:57.173 "rw_ios_per_sec": 0, 00:08:57.173 "rw_mbytes_per_sec": 0, 00:08:57.173 "r_mbytes_per_sec": 0, 00:08:57.173 "w_mbytes_per_sec": 0 00:08:57.173 }, 00:08:57.173 "claimed": true, 00:08:57.173 "claim_type": "exclusive_write", 00:08:57.173 "zoned": false, 00:08:57.173 "supported_io_types": { 00:08:57.173 "read": true, 00:08:57.173 "write": true, 00:08:57.173 "unmap": true, 00:08:57.173 "flush": true, 00:08:57.173 "reset": true, 00:08:57.173 "nvme_admin": false, 00:08:57.173 "nvme_io": false, 00:08:57.173 "nvme_io_md": false, 00:08:57.173 "write_zeroes": true, 00:08:57.173 "zcopy": true, 00:08:57.173 "get_zone_info": false, 00:08:57.173 "zone_management": false, 00:08:57.173 "zone_append": false, 00:08:57.173 "compare": false, 00:08:57.173 "compare_and_write": false, 00:08:57.173 "abort": true, 00:08:57.173 "seek_hole": false, 00:08:57.173 "seek_data": false, 00:08:57.173 "copy": true, 00:08:57.173 "nvme_iov_md": false 00:08:57.173 }, 00:08:57.173 "memory_domains": [ 00:08:57.173 { 00:08:57.173 "dma_device_id": "system", 00:08:57.173 "dma_device_type": 1 00:08:57.173 }, 00:08:57.173 { 00:08:57.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.436 
"dma_device_type": 2 00:08:57.436 } 00:08:57.436 ], 00:08:57.436 "driver_specific": {} 00:08:57.436 } 00:08:57.436 ] 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.436 "name": "Existed_Raid", 00:08:57.436 "uuid": "f99a98ae-37f1-4b16-951e-5b317e500bd5", 00:08:57.436 "strip_size_kb": 64, 00:08:57.436 "state": "configuring", 00:08:57.436 "raid_level": "concat", 00:08:57.436 "superblock": true, 00:08:57.436 "num_base_bdevs": 3, 00:08:57.436 "num_base_bdevs_discovered": 2, 00:08:57.436 "num_base_bdevs_operational": 3, 00:08:57.436 "base_bdevs_list": [ 00:08:57.436 { 00:08:57.436 "name": "BaseBdev1", 00:08:57.436 "uuid": "6bff436f-ea29-4cdb-bcff-1f413d08108b", 00:08:57.436 "is_configured": true, 00:08:57.436 "data_offset": 2048, 00:08:57.436 "data_size": 63488 00:08:57.436 }, 00:08:57.436 { 00:08:57.436 "name": null, 00:08:57.436 "uuid": "39fa3282-808f-4775-9ce9-d27eb7544fb2", 00:08:57.436 "is_configured": false, 00:08:57.436 "data_offset": 0, 00:08:57.436 "data_size": 63488 00:08:57.436 }, 00:08:57.436 { 00:08:57.436 "name": "BaseBdev3", 00:08:57.436 "uuid": "0550c6d8-c2f6-4cb2-ab55-f61851d921b0", 00:08:57.436 "is_configured": true, 00:08:57.436 "data_offset": 2048, 00:08:57.436 "data_size": 63488 00:08:57.436 } 00:08:57.436 ] 00:08:57.436 }' 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.436 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.696 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.696 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.696 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:57.696 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:57.696 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.696 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:57.696 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:57.696 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.696 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.696 [2024-12-07 16:34:56.585450] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.954 
16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.954 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.954 "name": "Existed_Raid", 00:08:57.954 "uuid": "f99a98ae-37f1-4b16-951e-5b317e500bd5", 00:08:57.954 "strip_size_kb": 64, 00:08:57.954 "state": "configuring", 00:08:57.954 "raid_level": "concat", 00:08:57.954 "superblock": true, 00:08:57.954 "num_base_bdevs": 3, 00:08:57.954 "num_base_bdevs_discovered": 1, 00:08:57.954 "num_base_bdevs_operational": 3, 00:08:57.954 "base_bdevs_list": [ 00:08:57.954 { 00:08:57.954 "name": "BaseBdev1", 00:08:57.954 "uuid": "6bff436f-ea29-4cdb-bcff-1f413d08108b", 00:08:57.954 "is_configured": true, 00:08:57.954 "data_offset": 2048, 00:08:57.954 "data_size": 63488 00:08:57.954 }, 00:08:57.954 { 00:08:57.954 "name": null, 00:08:57.954 "uuid": "39fa3282-808f-4775-9ce9-d27eb7544fb2", 00:08:57.954 "is_configured": false, 00:08:57.955 "data_offset": 0, 00:08:57.955 "data_size": 63488 00:08:57.955 }, 00:08:57.955 { 00:08:57.955 "name": null, 00:08:57.955 "uuid": "0550c6d8-c2f6-4cb2-ab55-f61851d921b0", 00:08:57.955 "is_configured": false, 00:08:57.955 "data_offset": 0, 00:08:57.955 "data_size": 63488 00:08:57.955 } 00:08:57.955 ] 00:08:57.955 }' 00:08:57.955 16:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.955 16:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.215 
16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.215 [2024-12-07 16:34:57.068680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.215 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.475 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.475 "name": "Existed_Raid", 00:08:58.475 "uuid": "f99a98ae-37f1-4b16-951e-5b317e500bd5", 00:08:58.475 "strip_size_kb": 64, 00:08:58.475 "state": "configuring", 00:08:58.475 "raid_level": "concat", 00:08:58.475 "superblock": true, 00:08:58.475 "num_base_bdevs": 3, 00:08:58.475 "num_base_bdevs_discovered": 2, 00:08:58.475 "num_base_bdevs_operational": 3, 00:08:58.475 "base_bdevs_list": [ 00:08:58.475 { 00:08:58.475 "name": "BaseBdev1", 00:08:58.475 "uuid": "6bff436f-ea29-4cdb-bcff-1f413d08108b", 00:08:58.475 "is_configured": true, 00:08:58.475 "data_offset": 2048, 00:08:58.475 "data_size": 63488 00:08:58.475 }, 00:08:58.475 { 00:08:58.475 "name": null, 00:08:58.475 "uuid": "39fa3282-808f-4775-9ce9-d27eb7544fb2", 00:08:58.475 "is_configured": false, 00:08:58.475 "data_offset": 0, 00:08:58.475 "data_size": 
63488 00:08:58.475 }, 00:08:58.475 { 00:08:58.475 "name": "BaseBdev3", 00:08:58.475 "uuid": "0550c6d8-c2f6-4cb2-ab55-f61851d921b0", 00:08:58.475 "is_configured": true, 00:08:58.475 "data_offset": 2048, 00:08:58.475 "data_size": 63488 00:08:58.475 } 00:08:58.475 ] 00:08:58.475 }' 00:08:58.475 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.475 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.734 [2024-12-07 16:34:57.563848] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.734 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.994 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.994 "name": "Existed_Raid", 00:08:58.994 "uuid": "f99a98ae-37f1-4b16-951e-5b317e500bd5", 00:08:58.994 "strip_size_kb": 64, 00:08:58.994 "state": "configuring", 00:08:58.994 "raid_level": "concat", 00:08:58.994 "superblock": true, 00:08:58.994 "num_base_bdevs": 3, 00:08:58.994 "num_base_bdevs_discovered": 1, 00:08:58.994 "num_base_bdevs_operational": 
3, 00:08:58.994 "base_bdevs_list": [ 00:08:58.994 { 00:08:58.994 "name": null, 00:08:58.994 "uuid": "6bff436f-ea29-4cdb-bcff-1f413d08108b", 00:08:58.994 "is_configured": false, 00:08:58.994 "data_offset": 0, 00:08:58.994 "data_size": 63488 00:08:58.994 }, 00:08:58.994 { 00:08:58.994 "name": null, 00:08:58.994 "uuid": "39fa3282-808f-4775-9ce9-d27eb7544fb2", 00:08:58.994 "is_configured": false, 00:08:58.994 "data_offset": 0, 00:08:58.994 "data_size": 63488 00:08:58.994 }, 00:08:58.994 { 00:08:58.994 "name": "BaseBdev3", 00:08:58.994 "uuid": "0550c6d8-c2f6-4cb2-ab55-f61851d921b0", 00:08:58.994 "is_configured": true, 00:08:58.994 "data_offset": 2048, 00:08:58.994 "data_size": 63488 00:08:58.994 } 00:08:58.994 ] 00:08:58.994 }' 00:08:58.994 16:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.994 16:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:59.254 [2024-12-07 16:34:58.082910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.254 "name": "Existed_Raid", 00:08:59.254 "uuid": "f99a98ae-37f1-4b16-951e-5b317e500bd5", 00:08:59.254 "strip_size_kb": 64, 00:08:59.254 "state": "configuring", 00:08:59.254 "raid_level": "concat", 00:08:59.254 "superblock": true, 00:08:59.254 "num_base_bdevs": 3, 00:08:59.254 "num_base_bdevs_discovered": 2, 00:08:59.254 "num_base_bdevs_operational": 3, 00:08:59.254 "base_bdevs_list": [ 00:08:59.254 { 00:08:59.254 "name": null, 00:08:59.254 "uuid": "6bff436f-ea29-4cdb-bcff-1f413d08108b", 00:08:59.254 "is_configured": false, 00:08:59.254 "data_offset": 0, 00:08:59.254 "data_size": 63488 00:08:59.254 }, 00:08:59.254 { 00:08:59.254 "name": "BaseBdev2", 00:08:59.254 "uuid": "39fa3282-808f-4775-9ce9-d27eb7544fb2", 00:08:59.254 "is_configured": true, 00:08:59.254 "data_offset": 2048, 00:08:59.254 "data_size": 63488 00:08:59.254 }, 00:08:59.254 { 00:08:59.254 "name": "BaseBdev3", 00:08:59.254 "uuid": "0550c6d8-c2f6-4cb2-ab55-f61851d921b0", 00:08:59.254 "is_configured": true, 00:08:59.254 "data_offset": 2048, 00:08:59.254 "data_size": 63488 00:08:59.254 } 00:08:59.254 ] 00:08:59.254 }' 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.254 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6bff436f-ea29-4cdb-bcff-1f413d08108b 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.824 [2024-12-07 16:34:58.651277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:59.824 [2024-12-07 16:34:58.651570] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:59.824 [2024-12-07 16:34:58.651624] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.824 [2024-12-07 16:34:58.651926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:59.824 [2024-12-07 16:34:58.652085] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:59.824 NewBaseBdev 00:08:59.824 [2024-12-07 16:34:58.652130] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:59.824 [2024-12-07 16:34:58.652282] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.824 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.824 [ 00:08:59.824 { 00:08:59.825 "name": "NewBaseBdev", 00:08:59.825 "aliases": [ 00:08:59.825 "6bff436f-ea29-4cdb-bcff-1f413d08108b" 00:08:59.825 ], 00:08:59.825 "product_name": "Malloc disk", 00:08:59.825 "block_size": 512, 00:08:59.825 "num_blocks": 65536, 00:08:59.825 "uuid": 
"6bff436f-ea29-4cdb-bcff-1f413d08108b", 00:08:59.825 "assigned_rate_limits": { 00:08:59.825 "rw_ios_per_sec": 0, 00:08:59.825 "rw_mbytes_per_sec": 0, 00:08:59.825 "r_mbytes_per_sec": 0, 00:08:59.825 "w_mbytes_per_sec": 0 00:08:59.825 }, 00:08:59.825 "claimed": true, 00:08:59.825 "claim_type": "exclusive_write", 00:08:59.825 "zoned": false, 00:08:59.825 "supported_io_types": { 00:08:59.825 "read": true, 00:08:59.825 "write": true, 00:08:59.825 "unmap": true, 00:08:59.825 "flush": true, 00:08:59.825 "reset": true, 00:08:59.825 "nvme_admin": false, 00:08:59.825 "nvme_io": false, 00:08:59.825 "nvme_io_md": false, 00:08:59.825 "write_zeroes": true, 00:08:59.825 "zcopy": true, 00:08:59.825 "get_zone_info": false, 00:08:59.825 "zone_management": false, 00:08:59.825 "zone_append": false, 00:08:59.825 "compare": false, 00:08:59.825 "compare_and_write": false, 00:08:59.825 "abort": true, 00:08:59.825 "seek_hole": false, 00:08:59.825 "seek_data": false, 00:08:59.825 "copy": true, 00:08:59.825 "nvme_iov_md": false 00:08:59.825 }, 00:08:59.825 "memory_domains": [ 00:08:59.825 { 00:08:59.825 "dma_device_id": "system", 00:08:59.825 "dma_device_type": 1 00:08:59.825 }, 00:08:59.825 { 00:08:59.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.825 "dma_device_type": 2 00:08:59.825 } 00:08:59.825 ], 00:08:59.825 "driver_specific": {} 00:08:59.825 } 00:08:59.825 ] 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.825 16:34:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.825 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.084 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.084 "name": "Existed_Raid", 00:09:00.084 "uuid": "f99a98ae-37f1-4b16-951e-5b317e500bd5", 00:09:00.084 "strip_size_kb": 64, 00:09:00.084 "state": "online", 00:09:00.084 "raid_level": "concat", 00:09:00.084 "superblock": true, 00:09:00.084 "num_base_bdevs": 3, 00:09:00.084 "num_base_bdevs_discovered": 3, 00:09:00.084 "num_base_bdevs_operational": 3, 00:09:00.084 "base_bdevs_list": [ 00:09:00.084 { 00:09:00.084 "name": "NewBaseBdev", 00:09:00.084 "uuid": "6bff436f-ea29-4cdb-bcff-1f413d08108b", 00:09:00.084 "is_configured": 
true, 00:09:00.084 "data_offset": 2048, 00:09:00.084 "data_size": 63488 00:09:00.084 }, 00:09:00.084 { 00:09:00.084 "name": "BaseBdev2", 00:09:00.084 "uuid": "39fa3282-808f-4775-9ce9-d27eb7544fb2", 00:09:00.084 "is_configured": true, 00:09:00.084 "data_offset": 2048, 00:09:00.084 "data_size": 63488 00:09:00.084 }, 00:09:00.084 { 00:09:00.084 "name": "BaseBdev3", 00:09:00.084 "uuid": "0550c6d8-c2f6-4cb2-ab55-f61851d921b0", 00:09:00.084 "is_configured": true, 00:09:00.084 "data_offset": 2048, 00:09:00.084 "data_size": 63488 00:09:00.084 } 00:09:00.084 ] 00:09:00.084 }' 00:09:00.084 16:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.084 16:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.342 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:00.342 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:00.342 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.342 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.342 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.342 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.342 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:00.343 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.343 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.343 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.343 [2024-12-07 16:34:59.158782] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.343 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.343 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.343 "name": "Existed_Raid", 00:09:00.343 "aliases": [ 00:09:00.343 "f99a98ae-37f1-4b16-951e-5b317e500bd5" 00:09:00.343 ], 00:09:00.343 "product_name": "Raid Volume", 00:09:00.343 "block_size": 512, 00:09:00.343 "num_blocks": 190464, 00:09:00.343 "uuid": "f99a98ae-37f1-4b16-951e-5b317e500bd5", 00:09:00.343 "assigned_rate_limits": { 00:09:00.343 "rw_ios_per_sec": 0, 00:09:00.343 "rw_mbytes_per_sec": 0, 00:09:00.343 "r_mbytes_per_sec": 0, 00:09:00.343 "w_mbytes_per_sec": 0 00:09:00.343 }, 00:09:00.343 "claimed": false, 00:09:00.343 "zoned": false, 00:09:00.343 "supported_io_types": { 00:09:00.343 "read": true, 00:09:00.343 "write": true, 00:09:00.343 "unmap": true, 00:09:00.343 "flush": true, 00:09:00.343 "reset": true, 00:09:00.343 "nvme_admin": false, 00:09:00.343 "nvme_io": false, 00:09:00.343 "nvme_io_md": false, 00:09:00.343 "write_zeroes": true, 00:09:00.343 "zcopy": false, 00:09:00.343 "get_zone_info": false, 00:09:00.343 "zone_management": false, 00:09:00.343 "zone_append": false, 00:09:00.343 "compare": false, 00:09:00.343 "compare_and_write": false, 00:09:00.343 "abort": false, 00:09:00.343 "seek_hole": false, 00:09:00.343 "seek_data": false, 00:09:00.343 "copy": false, 00:09:00.343 "nvme_iov_md": false 00:09:00.343 }, 00:09:00.343 "memory_domains": [ 00:09:00.343 { 00:09:00.343 "dma_device_id": "system", 00:09:00.343 "dma_device_type": 1 00:09:00.343 }, 00:09:00.343 { 00:09:00.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.343 "dma_device_type": 2 00:09:00.343 }, 00:09:00.343 { 00:09:00.343 "dma_device_id": "system", 00:09:00.343 "dma_device_type": 1 00:09:00.343 }, 00:09:00.343 { 00:09:00.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.343 
"dma_device_type": 2 00:09:00.343 }, 00:09:00.343 { 00:09:00.343 "dma_device_id": "system", 00:09:00.343 "dma_device_type": 1 00:09:00.343 }, 00:09:00.343 { 00:09:00.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.343 "dma_device_type": 2 00:09:00.343 } 00:09:00.343 ], 00:09:00.343 "driver_specific": { 00:09:00.343 "raid": { 00:09:00.343 "uuid": "f99a98ae-37f1-4b16-951e-5b317e500bd5", 00:09:00.343 "strip_size_kb": 64, 00:09:00.343 "state": "online", 00:09:00.343 "raid_level": "concat", 00:09:00.343 "superblock": true, 00:09:00.343 "num_base_bdevs": 3, 00:09:00.343 "num_base_bdevs_discovered": 3, 00:09:00.343 "num_base_bdevs_operational": 3, 00:09:00.343 "base_bdevs_list": [ 00:09:00.343 { 00:09:00.343 "name": "NewBaseBdev", 00:09:00.343 "uuid": "6bff436f-ea29-4cdb-bcff-1f413d08108b", 00:09:00.343 "is_configured": true, 00:09:00.343 "data_offset": 2048, 00:09:00.343 "data_size": 63488 00:09:00.343 }, 00:09:00.343 { 00:09:00.343 "name": "BaseBdev2", 00:09:00.343 "uuid": "39fa3282-808f-4775-9ce9-d27eb7544fb2", 00:09:00.343 "is_configured": true, 00:09:00.343 "data_offset": 2048, 00:09:00.343 "data_size": 63488 00:09:00.343 }, 00:09:00.343 { 00:09:00.343 "name": "BaseBdev3", 00:09:00.343 "uuid": "0550c6d8-c2f6-4cb2-ab55-f61851d921b0", 00:09:00.343 "is_configured": true, 00:09:00.343 "data_offset": 2048, 00:09:00.343 "data_size": 63488 00:09:00.343 } 00:09:00.343 ] 00:09:00.343 } 00:09:00.343 } 00:09:00.343 }' 00:09:00.343 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:00.602 BaseBdev2 00:09:00.602 BaseBdev3' 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.602 
16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.602 [2024-12-07 16:34:59.457890] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.602 [2024-12-07 16:34:59.457961] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.602 [2024-12-07 16:34:59.458063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.602 [2024-12-07 16:34:59.458144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.602 [2024-12-07 16:34:59.458200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:00.602 16:34:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77618 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77618 ']' 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77618 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:00.602 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77618 00:09:00.861 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:00.861 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:00.861 killing process with pid 77618 00:09:00.861 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77618' 00:09:00.861 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77618 00:09:00.861 [2024-12-07 16:34:59.510018] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.861 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77618 00:09:00.861 [2024-12-07 16:34:59.569691] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.120 16:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:01.120 00:09:01.120 real 0m9.320s 00:09:01.120 user 0m15.618s 00:09:01.120 sys 0m1.975s 00:09:01.120 16:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.120 16:34:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:01.120 ************************************ 00:09:01.120 END TEST raid_state_function_test_sb 00:09:01.120 ************************************ 00:09:01.120 16:35:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:01.120 16:35:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:01.120 16:35:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.120 16:35:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.379 ************************************ 00:09:01.379 START TEST raid_superblock_test 00:09:01.379 ************************************ 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78227 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78227 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 78227 ']' 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.379 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.379 [2024-12-07 16:35:00.114777] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:01.379 [2024-12-07 16:35:00.115033] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78227 ] 00:09:01.638 [2024-12-07 16:35:00.280901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.638 [2024-12-07 16:35:00.354795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.638 [2024-12-07 16:35:00.431717] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.638 [2024-12-07 16:35:00.431764] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:02.204 
16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.204 malloc1 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.204 [2024-12-07 16:35:00.982817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:02.204 [2024-12-07 16:35:00.982949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.204 [2024-12-07 16:35:00.982989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:02.204 [2024-12-07 16:35:00.983027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.204 [2024-12-07 16:35:00.985479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.204 [2024-12-07 16:35:00.985551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:02.204 pt1 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:02.204 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.205 16:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.205 malloc2 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.205 [2024-12-07 16:35:01.031742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:02.205 [2024-12-07 16:35:01.031845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.205 [2024-12-07 16:35:01.031881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:02.205 [2024-12-07 16:35:01.031920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.205 [2024-12-07 16:35:01.034540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.205 [2024-12-07 16:35:01.034614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:02.205 
pt2 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.205 malloc3 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.205 [2024-12-07 16:35:01.070376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:02.205 [2024-12-07 16:35:01.070459] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.205 [2024-12-07 16:35:01.070490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:02.205 [2024-12-07 16:35:01.070519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.205 [2024-12-07 16:35:01.072863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.205 [2024-12-07 16:35:01.072929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:02.205 pt3 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.205 [2024-12-07 16:35:01.082409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:02.205 [2024-12-07 16:35:01.084499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:02.205 [2024-12-07 16:35:01.084597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:02.205 [2024-12-07 16:35:01.084763] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:02.205 [2024-12-07 16:35:01.084816] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:02.205 [2024-12-07 16:35:01.085122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:09:02.205 [2024-12-07 16:35:01.085287] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:02.205 [2024-12-07 16:35:01.085333] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:02.205 [2024-12-07 16:35:01.085502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.205 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.205 16:35:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.462 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.463 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.463 "name": "raid_bdev1", 00:09:02.463 "uuid": "c340c4c1-48ed-48cd-ae35-58907ffddf3e", 00:09:02.463 "strip_size_kb": 64, 00:09:02.463 "state": "online", 00:09:02.463 "raid_level": "concat", 00:09:02.463 "superblock": true, 00:09:02.463 "num_base_bdevs": 3, 00:09:02.463 "num_base_bdevs_discovered": 3, 00:09:02.463 "num_base_bdevs_operational": 3, 00:09:02.463 "base_bdevs_list": [ 00:09:02.463 { 00:09:02.463 "name": "pt1", 00:09:02.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:02.463 "is_configured": true, 00:09:02.463 "data_offset": 2048, 00:09:02.463 "data_size": 63488 00:09:02.463 }, 00:09:02.463 { 00:09:02.463 "name": "pt2", 00:09:02.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:02.463 "is_configured": true, 00:09:02.463 "data_offset": 2048, 00:09:02.463 "data_size": 63488 00:09:02.463 }, 00:09:02.463 { 00:09:02.463 "name": "pt3", 00:09:02.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:02.463 "is_configured": true, 00:09:02.463 "data_offset": 2048, 00:09:02.463 "data_size": 63488 00:09:02.463 } 00:09:02.463 ] 00:09:02.463 }' 00:09:02.463 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.463 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.720 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:02.720 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:02.720 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.720 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:02.720 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.720 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.720 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:02.720 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.720 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.720 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.720 [2024-12-07 16:35:01.537915] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.720 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.720 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.720 "name": "raid_bdev1", 00:09:02.720 "aliases": [ 00:09:02.720 "c340c4c1-48ed-48cd-ae35-58907ffddf3e" 00:09:02.720 ], 00:09:02.720 "product_name": "Raid Volume", 00:09:02.720 "block_size": 512, 00:09:02.720 "num_blocks": 190464, 00:09:02.720 "uuid": "c340c4c1-48ed-48cd-ae35-58907ffddf3e", 00:09:02.720 "assigned_rate_limits": { 00:09:02.720 "rw_ios_per_sec": 0, 00:09:02.720 "rw_mbytes_per_sec": 0, 00:09:02.720 "r_mbytes_per_sec": 0, 00:09:02.720 "w_mbytes_per_sec": 0 00:09:02.720 }, 00:09:02.720 "claimed": false, 00:09:02.720 "zoned": false, 00:09:02.720 "supported_io_types": { 00:09:02.720 "read": true, 00:09:02.720 "write": true, 00:09:02.720 "unmap": true, 00:09:02.720 "flush": true, 00:09:02.720 "reset": true, 00:09:02.720 "nvme_admin": false, 00:09:02.720 "nvme_io": false, 00:09:02.720 "nvme_io_md": false, 00:09:02.720 "write_zeroes": true, 00:09:02.720 "zcopy": false, 00:09:02.720 "get_zone_info": false, 00:09:02.720 "zone_management": false, 00:09:02.720 "zone_append": false, 00:09:02.720 "compare": 
false, 00:09:02.720 "compare_and_write": false, 00:09:02.720 "abort": false, 00:09:02.720 "seek_hole": false, 00:09:02.720 "seek_data": false, 00:09:02.720 "copy": false, 00:09:02.720 "nvme_iov_md": false 00:09:02.720 }, 00:09:02.720 "memory_domains": [ 00:09:02.720 { 00:09:02.720 "dma_device_id": "system", 00:09:02.720 "dma_device_type": 1 00:09:02.720 }, 00:09:02.720 { 00:09:02.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.720 "dma_device_type": 2 00:09:02.720 }, 00:09:02.720 { 00:09:02.720 "dma_device_id": "system", 00:09:02.720 "dma_device_type": 1 00:09:02.720 }, 00:09:02.720 { 00:09:02.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.720 "dma_device_type": 2 00:09:02.720 }, 00:09:02.720 { 00:09:02.720 "dma_device_id": "system", 00:09:02.720 "dma_device_type": 1 00:09:02.720 }, 00:09:02.720 { 00:09:02.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.720 "dma_device_type": 2 00:09:02.720 } 00:09:02.720 ], 00:09:02.720 "driver_specific": { 00:09:02.720 "raid": { 00:09:02.720 "uuid": "c340c4c1-48ed-48cd-ae35-58907ffddf3e", 00:09:02.720 "strip_size_kb": 64, 00:09:02.720 "state": "online", 00:09:02.720 "raid_level": "concat", 00:09:02.720 "superblock": true, 00:09:02.720 "num_base_bdevs": 3, 00:09:02.720 "num_base_bdevs_discovered": 3, 00:09:02.720 "num_base_bdevs_operational": 3, 00:09:02.720 "base_bdevs_list": [ 00:09:02.720 { 00:09:02.720 "name": "pt1", 00:09:02.720 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:02.720 "is_configured": true, 00:09:02.720 "data_offset": 2048, 00:09:02.720 "data_size": 63488 00:09:02.720 }, 00:09:02.720 { 00:09:02.720 "name": "pt2", 00:09:02.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:02.720 "is_configured": true, 00:09:02.720 "data_offset": 2048, 00:09:02.720 "data_size": 63488 00:09:02.720 }, 00:09:02.720 { 00:09:02.720 "name": "pt3", 00:09:02.720 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:02.720 "is_configured": true, 00:09:02.720 "data_offset": 2048, 00:09:02.720 
"data_size": 63488 00:09:02.720 } 00:09:02.720 ] 00:09:02.720 } 00:09:02.720 } 00:09:02.720 }' 00:09:02.720 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.720 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:02.720 pt2 00:09:02.720 pt3' 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:02.978 [2024-12-07 16:35:01.785372] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c340c4c1-48ed-48cd-ae35-58907ffddf3e 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c340c4c1-48ed-48cd-ae35-58907ffddf3e ']' 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.978 [2024-12-07 16:35:01.813068] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:02.978 [2024-12-07 16:35:01.813133] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.978 [2024-12-07 16:35:01.813245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.978 [2024-12-07 16:35:01.813325] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.978 [2024-12-07 16:35:01.813446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.978 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.237 [2024-12-07 16:35:01.960826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:03.237 [2024-12-07 16:35:01.962908] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:03.237 
[2024-12-07 16:35:01.963006] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:03.237 [2024-12-07 16:35:01.963076] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:03.237 [2024-12-07 16:35:01.963167] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:03.237 [2024-12-07 16:35:01.963248] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:03.237 [2024-12-07 16:35:01.963319] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:03.237 [2024-12-07 16:35:01.963354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:03.237 request: 00:09:03.237 { 00:09:03.237 "name": "raid_bdev1", 00:09:03.237 "raid_level": "concat", 00:09:03.237 "base_bdevs": [ 00:09:03.237 "malloc1", 00:09:03.237 "malloc2", 00:09:03.237 "malloc3" 00:09:03.237 ], 00:09:03.237 "strip_size_kb": 64, 00:09:03.237 "superblock": false, 00:09:03.237 "method": "bdev_raid_create", 00:09:03.237 "req_id": 1 00:09:03.237 } 00:09:03.237 Got JSON-RPC error response 00:09:03.237 response: 00:09:03.237 { 00:09:03.237 "code": -17, 00:09:03.237 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:03.237 } 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:03.237 16:35:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.237 16:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.237 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:03.237 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:03.237 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:03.237 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.237 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.237 [2024-12-07 16:35:02.028681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:03.237 [2024-12-07 16:35:02.028757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.237 [2024-12-07 16:35:02.028788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:03.237 [2024-12-07 16:35:02.028826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.237 [2024-12-07 16:35:02.031250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.237 [2024-12-07 16:35:02.031317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:03.237 [2024-12-07 16:35:02.031418] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:03.237 [2024-12-07 16:35:02.031480] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:09:03.237 pt1 00:09:03.237 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.237 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:03.237 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.237 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.237 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.237 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.238 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.238 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.238 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.238 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.238 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.238 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.238 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.238 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.238 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.238 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.238 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.238 "name": "raid_bdev1", 00:09:03.238 "uuid": 
"c340c4c1-48ed-48cd-ae35-58907ffddf3e", 00:09:03.238 "strip_size_kb": 64, 00:09:03.238 "state": "configuring", 00:09:03.238 "raid_level": "concat", 00:09:03.238 "superblock": true, 00:09:03.238 "num_base_bdevs": 3, 00:09:03.238 "num_base_bdevs_discovered": 1, 00:09:03.238 "num_base_bdevs_operational": 3, 00:09:03.238 "base_bdevs_list": [ 00:09:03.238 { 00:09:03.238 "name": "pt1", 00:09:03.238 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:03.238 "is_configured": true, 00:09:03.238 "data_offset": 2048, 00:09:03.238 "data_size": 63488 00:09:03.238 }, 00:09:03.238 { 00:09:03.238 "name": null, 00:09:03.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:03.238 "is_configured": false, 00:09:03.238 "data_offset": 2048, 00:09:03.238 "data_size": 63488 00:09:03.238 }, 00:09:03.238 { 00:09:03.238 "name": null, 00:09:03.238 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:03.238 "is_configured": false, 00:09:03.238 "data_offset": 2048, 00:09:03.238 "data_size": 63488 00:09:03.238 } 00:09:03.238 ] 00:09:03.238 }' 00:09:03.238 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.238 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.802 [2024-12-07 16:35:02.475978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:03.802 [2024-12-07 16:35:02.476102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.802 [2024-12-07 16:35:02.476143] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:03.802 [2024-12-07 16:35:02.476176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.802 [2024-12-07 16:35:02.476694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.802 [2024-12-07 16:35:02.476751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:03.802 [2024-12-07 16:35:02.476863] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:03.802 [2024-12-07 16:35:02.476916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:03.802 pt2 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.802 [2024-12-07 16:35:02.487942] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.802 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.802 "name": "raid_bdev1", 00:09:03.802 "uuid": "c340c4c1-48ed-48cd-ae35-58907ffddf3e", 00:09:03.802 "strip_size_kb": 64, 00:09:03.802 "state": "configuring", 00:09:03.803 "raid_level": "concat", 00:09:03.803 "superblock": true, 00:09:03.803 "num_base_bdevs": 3, 00:09:03.803 "num_base_bdevs_discovered": 1, 00:09:03.803 "num_base_bdevs_operational": 3, 00:09:03.803 "base_bdevs_list": [ 00:09:03.803 { 00:09:03.803 "name": "pt1", 00:09:03.803 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:03.803 "is_configured": true, 00:09:03.803 "data_offset": 2048, 00:09:03.803 "data_size": 63488 00:09:03.803 }, 00:09:03.803 { 00:09:03.803 "name": null, 00:09:03.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:03.803 "is_configured": false, 00:09:03.803 "data_offset": 0, 00:09:03.803 "data_size": 63488 00:09:03.803 }, 00:09:03.803 { 00:09:03.803 "name": null, 00:09:03.803 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:03.803 "is_configured": false, 00:09:03.803 "data_offset": 2048, 00:09:03.803 "data_size": 63488 00:09:03.803 } 00:09:03.803 ] 00:09:03.803 }' 00:09:03.803 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.803 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.060 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:04.060 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:04.060 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:04.060 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.060 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.320 [2024-12-07 16:35:02.963099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:04.320 [2024-12-07 16:35:02.963201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.320 [2024-12-07 16:35:02.963237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:04.320 [2024-12-07 16:35:02.963266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.320 [2024-12-07 16:35:02.963758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.320 [2024-12-07 16:35:02.963817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:04.320 [2024-12-07 16:35:02.963925] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:04.320 [2024-12-07 16:35:02.963974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:04.320 pt2 00:09:04.320 16:35:02 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.320 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:04.320 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:04.320 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:04.320 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.320 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.320 [2024-12-07 16:35:02.975060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:04.320 [2024-12-07 16:35:02.975134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.320 [2024-12-07 16:35:02.975169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:04.320 [2024-12-07 16:35:02.975191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.320 [2024-12-07 16:35:02.975582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.320 [2024-12-07 16:35:02.975637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:04.320 [2024-12-07 16:35:02.975724] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:04.320 [2024-12-07 16:35:02.975769] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:04.320 [2024-12-07 16:35:02.975884] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:04.320 [2024-12-07 16:35:02.975919] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:04.320 [2024-12-07 16:35:02.976186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:04.320 [2024-12-07 
16:35:02.976323] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:04.320 [2024-12-07 16:35:02.976372] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:04.320 [2024-12-07 16:35:02.976510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.320 pt3 00:09:04.320 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.320 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:04.320 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:04.320 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:04.320 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.320 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.320 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.320 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.321 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.321 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.321 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.321 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.321 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.321 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.321 16:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.321 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.321 16:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.321 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.321 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.321 "name": "raid_bdev1", 00:09:04.321 "uuid": "c340c4c1-48ed-48cd-ae35-58907ffddf3e", 00:09:04.321 "strip_size_kb": 64, 00:09:04.321 "state": "online", 00:09:04.321 "raid_level": "concat", 00:09:04.321 "superblock": true, 00:09:04.321 "num_base_bdevs": 3, 00:09:04.321 "num_base_bdevs_discovered": 3, 00:09:04.321 "num_base_bdevs_operational": 3, 00:09:04.321 "base_bdevs_list": [ 00:09:04.321 { 00:09:04.321 "name": "pt1", 00:09:04.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.321 "is_configured": true, 00:09:04.321 "data_offset": 2048, 00:09:04.321 "data_size": 63488 00:09:04.321 }, 00:09:04.321 { 00:09:04.321 "name": "pt2", 00:09:04.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.321 "is_configured": true, 00:09:04.321 "data_offset": 2048, 00:09:04.321 "data_size": 63488 00:09:04.321 }, 00:09:04.321 { 00:09:04.321 "name": "pt3", 00:09:04.321 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.321 "is_configured": true, 00:09:04.321 "data_offset": 2048, 00:09:04.321 "data_size": 63488 00:09:04.321 } 00:09:04.321 ] 00:09:04.321 }' 00:09:04.321 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.321 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.581 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:04.581 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:04.581 16:35:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:04.581 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:04.581 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:04.581 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:04.581 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.581 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:04.581 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.581 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.581 [2024-12-07 16:35:03.430710] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.581 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.581 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:04.581 "name": "raid_bdev1", 00:09:04.581 "aliases": [ 00:09:04.581 "c340c4c1-48ed-48cd-ae35-58907ffddf3e" 00:09:04.581 ], 00:09:04.581 "product_name": "Raid Volume", 00:09:04.581 "block_size": 512, 00:09:04.581 "num_blocks": 190464, 00:09:04.581 "uuid": "c340c4c1-48ed-48cd-ae35-58907ffddf3e", 00:09:04.581 "assigned_rate_limits": { 00:09:04.581 "rw_ios_per_sec": 0, 00:09:04.581 "rw_mbytes_per_sec": 0, 00:09:04.581 "r_mbytes_per_sec": 0, 00:09:04.581 "w_mbytes_per_sec": 0 00:09:04.581 }, 00:09:04.581 "claimed": false, 00:09:04.581 "zoned": false, 00:09:04.581 "supported_io_types": { 00:09:04.581 "read": true, 00:09:04.581 "write": true, 00:09:04.581 "unmap": true, 00:09:04.581 "flush": true, 00:09:04.581 "reset": true, 00:09:04.581 "nvme_admin": false, 00:09:04.581 "nvme_io": false, 00:09:04.581 "nvme_io_md": false, 00:09:04.581 
"write_zeroes": true, 00:09:04.581 "zcopy": false, 00:09:04.581 "get_zone_info": false, 00:09:04.581 "zone_management": false, 00:09:04.581 "zone_append": false, 00:09:04.581 "compare": false, 00:09:04.581 "compare_and_write": false, 00:09:04.581 "abort": false, 00:09:04.581 "seek_hole": false, 00:09:04.581 "seek_data": false, 00:09:04.581 "copy": false, 00:09:04.581 "nvme_iov_md": false 00:09:04.581 }, 00:09:04.581 "memory_domains": [ 00:09:04.581 { 00:09:04.581 "dma_device_id": "system", 00:09:04.581 "dma_device_type": 1 00:09:04.581 }, 00:09:04.581 { 00:09:04.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.581 "dma_device_type": 2 00:09:04.581 }, 00:09:04.581 { 00:09:04.581 "dma_device_id": "system", 00:09:04.581 "dma_device_type": 1 00:09:04.581 }, 00:09:04.581 { 00:09:04.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.581 "dma_device_type": 2 00:09:04.581 }, 00:09:04.581 { 00:09:04.581 "dma_device_id": "system", 00:09:04.581 "dma_device_type": 1 00:09:04.581 }, 00:09:04.581 { 00:09:04.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.581 "dma_device_type": 2 00:09:04.581 } 00:09:04.581 ], 00:09:04.581 "driver_specific": { 00:09:04.581 "raid": { 00:09:04.581 "uuid": "c340c4c1-48ed-48cd-ae35-58907ffddf3e", 00:09:04.581 "strip_size_kb": 64, 00:09:04.581 "state": "online", 00:09:04.581 "raid_level": "concat", 00:09:04.581 "superblock": true, 00:09:04.581 "num_base_bdevs": 3, 00:09:04.581 "num_base_bdevs_discovered": 3, 00:09:04.581 "num_base_bdevs_operational": 3, 00:09:04.581 "base_bdevs_list": [ 00:09:04.581 { 00:09:04.581 "name": "pt1", 00:09:04.581 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.581 "is_configured": true, 00:09:04.581 "data_offset": 2048, 00:09:04.581 "data_size": 63488 00:09:04.582 }, 00:09:04.582 { 00:09:04.582 "name": "pt2", 00:09:04.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.582 "is_configured": true, 00:09:04.582 "data_offset": 2048, 00:09:04.582 "data_size": 63488 00:09:04.582 }, 00:09:04.582 
{ 00:09:04.582 "name": "pt3", 00:09:04.582 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.582 "is_configured": true, 00:09:04.582 "data_offset": 2048, 00:09:04.582 "data_size": 63488 00:09:04.582 } 00:09:04.582 ] 00:09:04.582 } 00:09:04.582 } 00:09:04.582 }' 00:09:04.582 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:04.841 pt2 00:09:04.841 pt3' 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:04.841 16:35:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:04.841 
[2024-12-07 16:35:03.686134] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c340c4c1-48ed-48cd-ae35-58907ffddf3e '!=' c340c4c1-48ed-48cd-ae35-58907ffddf3e ']' 00:09:04.841 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:04.842 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:04.842 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:04.842 16:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78227 00:09:04.842 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 78227 ']' 00:09:04.842 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 78227 00:09:04.842 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:05.102 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.102 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78227 00:09:05.102 killing process with pid 78227 00:09:05.102 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:05.102 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:05.102 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78227' 00:09:05.102 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 78227 00:09:05.102 [2024-12-07 16:35:03.762551] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.102 [2024-12-07 16:35:03.762646] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.102 16:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 78227 00:09:05.102 [2024-12-07 16:35:03.762716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.102 [2024-12-07 16:35:03.762726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:05.102 [2024-12-07 16:35:03.824302] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.362 16:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:05.362 00:09:05.362 real 0m4.171s 00:09:05.362 user 0m6.355s 00:09:05.362 sys 0m0.962s 00:09:05.362 16:35:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.362 ************************************ 00:09:05.362 END TEST raid_superblock_test 00:09:05.362 ************************************ 00:09:05.362 16:35:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.362 16:35:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:05.362 16:35:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:05.362 16:35:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.362 16:35:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.623 ************************************ 00:09:05.623 START TEST raid_read_error_test 00:09:05.623 ************************************ 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:05.623 16:35:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QxwDQlkh10 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78469 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78469 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78469 ']' 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.623 16:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.623 [2024-12-07 16:35:04.382294] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:05.623 [2024-12-07 16:35:04.382434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78469 ] 00:09:05.883 [2024-12-07 16:35:04.548649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.883 [2024-12-07 16:35:04.617459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.883 [2024-12-07 16:35:04.693401] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.883 [2024-12-07 16:35:04.693541] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.461 BaseBdev1_malloc 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.461 true 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.461 [2024-12-07 16:35:05.239821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:06.461 [2024-12-07 16:35:05.239945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.461 [2024-12-07 16:35:05.239996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:06.461 [2024-12-07 16:35:05.240025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.461 [2024-12-07 16:35:05.242601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.461 [2024-12-07 16:35:05.242673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:06.461 BaseBdev1 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:06.461 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.462 BaseBdev2_malloc 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.462 true 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.462 [2024-12-07 16:35:05.296889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:06.462 [2024-12-07 16:35:05.296991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.462 [2024-12-07 16:35:05.297032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:06.462 [2024-12-07 16:35:05.297061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.462 [2024-12-07 16:35:05.299640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.462 [2024-12-07 16:35:05.299710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:06.462 BaseBdev2 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.462 BaseBdev3_malloc 00:09:06.462 16:35:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.462 true 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.462 [2024-12-07 16:35:05.343729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:06.462 [2024-12-07 16:35:05.343817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.462 [2024-12-07 16:35:05.343853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:06.462 [2024-12-07 16:35:05.343881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.462 [2024-12-07 16:35:05.346271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.462 [2024-12-07 16:35:05.346303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:06.462 BaseBdev3 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.462 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.462 [2024-12-07 16:35:05.355783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.730 [2024-12-07 16:35:05.357932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.730 [2024-12-07 16:35:05.358054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.730 [2024-12-07 16:35:05.358261] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:06.730 [2024-12-07 16:35:05.358317] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:06.730 [2024-12-07 16:35:05.358625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:06.730 [2024-12-07 16:35:05.358812] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:06.730 [2024-12-07 16:35:05.358851] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:06.730 [2024-12-07 16:35:05.359062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.730 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.730 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:06.730 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.730 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.730 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.730 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.730 16:35:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.730 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.730 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.730 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.730 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.730 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.730 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.730 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.731 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.731 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.731 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.731 "name": "raid_bdev1", 00:09:06.731 "uuid": "fb14651d-870e-4dc9-859c-943dd172019e", 00:09:06.731 "strip_size_kb": 64, 00:09:06.731 "state": "online", 00:09:06.731 "raid_level": "concat", 00:09:06.731 "superblock": true, 00:09:06.731 "num_base_bdevs": 3, 00:09:06.731 "num_base_bdevs_discovered": 3, 00:09:06.731 "num_base_bdevs_operational": 3, 00:09:06.731 "base_bdevs_list": [ 00:09:06.731 { 00:09:06.731 "name": "BaseBdev1", 00:09:06.731 "uuid": "a53e87eb-3c99-589d-a9b2-3763a89d3fa8", 00:09:06.731 "is_configured": true, 00:09:06.731 "data_offset": 2048, 00:09:06.731 "data_size": 63488 00:09:06.731 }, 00:09:06.731 { 00:09:06.731 "name": "BaseBdev2", 00:09:06.731 "uuid": "53dd5c5a-b3eb-5546-b0d1-c6e7495bbd15", 00:09:06.731 "is_configured": true, 00:09:06.731 "data_offset": 2048, 00:09:06.731 "data_size": 63488 
00:09:06.731 }, 00:09:06.731 { 00:09:06.731 "name": "BaseBdev3", 00:09:06.731 "uuid": "0b3410cd-cbb4-5c95-87d9-d3dcbc3f6bd5", 00:09:06.731 "is_configured": true, 00:09:06.731 "data_offset": 2048, 00:09:06.731 "data_size": 63488 00:09:06.731 } 00:09:06.731 ] 00:09:06.731 }' 00:09:06.731 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.731 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.989 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:06.990 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:07.248 [2024-12-07 16:35:05.891331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:08.188 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:08.188 16:35:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.188 16:35:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.188 16:35:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.188 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:08.188 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:08.188 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.189 "name": "raid_bdev1", 00:09:08.189 "uuid": "fb14651d-870e-4dc9-859c-943dd172019e", 00:09:08.189 "strip_size_kb": 64, 00:09:08.189 "state": "online", 00:09:08.189 "raid_level": "concat", 00:09:08.189 "superblock": true, 00:09:08.189 "num_base_bdevs": 3, 00:09:08.189 "num_base_bdevs_discovered": 3, 00:09:08.189 "num_base_bdevs_operational": 3, 00:09:08.189 "base_bdevs_list": [ 00:09:08.189 { 00:09:08.189 "name": "BaseBdev1", 00:09:08.189 "uuid": "a53e87eb-3c99-589d-a9b2-3763a89d3fa8", 00:09:08.189 "is_configured": true, 00:09:08.189 "data_offset": 2048, 00:09:08.189 "data_size": 63488 
00:09:08.189 }, 00:09:08.189 { 00:09:08.189 "name": "BaseBdev2", 00:09:08.189 "uuid": "53dd5c5a-b3eb-5546-b0d1-c6e7495bbd15", 00:09:08.189 "is_configured": true, 00:09:08.189 "data_offset": 2048, 00:09:08.189 "data_size": 63488 00:09:08.189 }, 00:09:08.189 { 00:09:08.189 "name": "BaseBdev3", 00:09:08.189 "uuid": "0b3410cd-cbb4-5c95-87d9-d3dcbc3f6bd5", 00:09:08.189 "is_configured": true, 00:09:08.189 "data_offset": 2048, 00:09:08.189 "data_size": 63488 00:09:08.189 } 00:09:08.189 ] 00:09:08.189 }' 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.189 16:35:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.449 [2024-12-07 16:35:07.283906] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:08.449 [2024-12-07 16:35:07.283991] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.449 [2024-12-07 16:35:07.286422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.449 [2024-12-07 16:35:07.286516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.449 [2024-12-07 16:35:07.286572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.449 [2024-12-07 16:35:07.286613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:08.449 { 00:09:08.449 "results": [ 00:09:08.449 { 00:09:08.449 "job": "raid_bdev1", 00:09:08.449 "core_mask": "0x1", 00:09:08.449 "workload": "randrw", 00:09:08.449 "percentage": 50, 
00:09:08.449 "status": "finished", 00:09:08.449 "queue_depth": 1, 00:09:08.449 "io_size": 131072, 00:09:08.449 "runtime": 1.393199, 00:09:08.449 "iops": 15063.174751058536, 00:09:08.449 "mibps": 1882.896843882317, 00:09:08.449 "io_failed": 1, 00:09:08.449 "io_timeout": 0, 00:09:08.449 "avg_latency_us": 93.24203001109233, 00:09:08.449 "min_latency_us": 24.705676855895195, 00:09:08.449 "max_latency_us": 1323.598253275109 00:09:08.449 } 00:09:08.449 ], 00:09:08.449 "core_count": 1 00:09:08.449 } 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78469 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78469 ']' 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78469 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78469 00:09:08.449 killing process with pid 78469 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78469' 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78469 00:09:08.449 [2024-12-07 16:35:07.334642] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:08.449 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78469 00:09:08.709 [2024-12-07 
16:35:07.382675] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:08.970 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QxwDQlkh10 00:09:08.970 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:08.970 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:08.970 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:08.970 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:08.970 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.970 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:08.970 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:08.970 00:09:08.970 real 0m3.497s 00:09:08.970 user 0m4.266s 00:09:08.970 sys 0m0.661s 00:09:08.970 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.970 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.970 ************************************ 00:09:08.970 END TEST raid_read_error_test 00:09:08.970 ************************************ 00:09:08.970 16:35:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:08.970 16:35:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:08.970 16:35:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.970 16:35:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.970 ************************************ 00:09:08.970 START TEST raid_write_error_test 00:09:08.970 ************************************ 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:09:08.970 16:35:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:08.970 16:35:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pc8lbWEnFA 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78604 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78604 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78604 ']' 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.970 16:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.231 [2024-12-07 16:35:07.948018] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:09.231 [2024-12-07 16:35:07.948225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78604 ] 00:09:09.231 [2024-12-07 16:35:08.110951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.491 [2024-12-07 16:35:08.183394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.491 [2024-12-07 16:35:08.259617] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.491 [2024-12-07 16:35:08.259749] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.061 BaseBdev1_malloc 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.061 true 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.061 [2024-12-07 16:35:08.805976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:10.061 [2024-12-07 16:35:08.806035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.061 [2024-12-07 16:35:08.806055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:10.061 [2024-12-07 16:35:08.806064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.061 [2024-12-07 16:35:08.808549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.061 [2024-12-07 16:35:08.808581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:10.061 BaseBdev1 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:10.061 BaseBdev2_malloc 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.061 true 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.061 [2024-12-07 16:35:08.862174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:10.061 [2024-12-07 16:35:08.862223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.061 [2024-12-07 16:35:08.862241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:10.061 [2024-12-07 16:35:08.862249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.061 [2024-12-07 16:35:08.864637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.061 [2024-12-07 16:35:08.864708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:10.061 BaseBdev2 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.061 16:35:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.061 BaseBdev3_malloc 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.061 true 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.061 [2024-12-07 16:35:08.908703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:10.061 [2024-12-07 16:35:08.908746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.061 [2024-12-07 16:35:08.908765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:10.061 [2024-12-07 16:35:08.908774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.061 [2024-12-07 16:35:08.911033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.061 [2024-12-07 16:35:08.911067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:10.061 BaseBdev3 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.061 [2024-12-07 16:35:08.920756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.061 [2024-12-07 16:35:08.922799] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.061 [2024-12-07 16:35:08.922877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.061 [2024-12-07 16:35:08.923076] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:10.061 [2024-12-07 16:35:08.923091] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.061 [2024-12-07 16:35:08.923335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:10.061 [2024-12-07 16:35:08.923479] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:10.061 [2024-12-07 16:35:08.923490] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:10.061 [2024-12-07 16:35:08.923612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.061 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:10.062 16:35:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.062 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.062 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.062 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.062 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.062 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.062 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.062 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.062 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.062 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.062 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.062 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.062 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.062 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.321 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.321 "name": "raid_bdev1", 00:09:10.321 "uuid": "7623faf5-156b-4af1-b718-496172700e44", 00:09:10.321 "strip_size_kb": 64, 00:09:10.321 "state": "online", 00:09:10.321 "raid_level": "concat", 00:09:10.321 "superblock": true, 00:09:10.321 "num_base_bdevs": 3, 00:09:10.321 "num_base_bdevs_discovered": 3, 00:09:10.321 "num_base_bdevs_operational": 3, 00:09:10.321 "base_bdevs_list": [ 00:09:10.321 { 00:09:10.321 
"name": "BaseBdev1", 00:09:10.321 "uuid": "77f2c3c6-0f39-5e32-befd-f94e9d8ed78b", 00:09:10.321 "is_configured": true, 00:09:10.321 "data_offset": 2048, 00:09:10.322 "data_size": 63488 00:09:10.322 }, 00:09:10.322 { 00:09:10.322 "name": "BaseBdev2", 00:09:10.322 "uuid": "6c98ff39-a110-5edc-85ab-3c33346f2d35", 00:09:10.322 "is_configured": true, 00:09:10.322 "data_offset": 2048, 00:09:10.322 "data_size": 63488 00:09:10.322 }, 00:09:10.322 { 00:09:10.322 "name": "BaseBdev3", 00:09:10.322 "uuid": "916b4523-0a77-52ee-8837-e4ac6031f832", 00:09:10.322 "is_configured": true, 00:09:10.322 "data_offset": 2048, 00:09:10.322 "data_size": 63488 00:09:10.322 } 00:09:10.322 ] 00:09:10.322 }' 00:09:10.322 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.322 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.582 16:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:10.582 16:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:10.842 [2024-12-07 16:35:09.480415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.783 "name": "raid_bdev1", 00:09:11.783 "uuid": "7623faf5-156b-4af1-b718-496172700e44", 00:09:11.783 "strip_size_kb": 64, 00:09:11.783 "state": "online", 
00:09:11.783 "raid_level": "concat", 00:09:11.783 "superblock": true, 00:09:11.783 "num_base_bdevs": 3, 00:09:11.783 "num_base_bdevs_discovered": 3, 00:09:11.783 "num_base_bdevs_operational": 3, 00:09:11.783 "base_bdevs_list": [ 00:09:11.783 { 00:09:11.783 "name": "BaseBdev1", 00:09:11.783 "uuid": "77f2c3c6-0f39-5e32-befd-f94e9d8ed78b", 00:09:11.783 "is_configured": true, 00:09:11.783 "data_offset": 2048, 00:09:11.783 "data_size": 63488 00:09:11.783 }, 00:09:11.783 { 00:09:11.783 "name": "BaseBdev2", 00:09:11.783 "uuid": "6c98ff39-a110-5edc-85ab-3c33346f2d35", 00:09:11.783 "is_configured": true, 00:09:11.783 "data_offset": 2048, 00:09:11.783 "data_size": 63488 00:09:11.783 }, 00:09:11.783 { 00:09:11.783 "name": "BaseBdev3", 00:09:11.783 "uuid": "916b4523-0a77-52ee-8837-e4ac6031f832", 00:09:11.783 "is_configured": true, 00:09:11.783 "data_offset": 2048, 00:09:11.783 "data_size": 63488 00:09:11.783 } 00:09:11.783 ] 00:09:11.783 }' 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.783 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.043 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:12.043 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.043 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.043 [2024-12-07 16:35:10.857113] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.043 [2024-12-07 16:35:10.857216] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.043 [2024-12-07 16:35:10.859800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.043 [2024-12-07 16:35:10.859887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.044 [2024-12-07 16:35:10.859942] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.044 [2024-12-07 16:35:10.860005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:12.044 { 00:09:12.044 "results": [ 00:09:12.044 { 00:09:12.044 "job": "raid_bdev1", 00:09:12.044 "core_mask": "0x1", 00:09:12.044 "workload": "randrw", 00:09:12.044 "percentage": 50, 00:09:12.044 "status": "finished", 00:09:12.044 "queue_depth": 1, 00:09:12.044 "io_size": 131072, 00:09:12.044 "runtime": 1.377273, 00:09:12.044 "iops": 14605.673675444157, 00:09:12.044 "mibps": 1825.7092094305196, 00:09:12.044 "io_failed": 1, 00:09:12.044 "io_timeout": 0, 00:09:12.044 "avg_latency_us": 96.35313833289231, 00:09:12.044 "min_latency_us": 24.593886462882097, 00:09:12.044 "max_latency_us": 1337.907423580786 00:09:12.044 } 00:09:12.044 ], 00:09:12.044 "core_count": 1 00:09:12.044 } 00:09:12.044 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.044 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78604 00:09:12.044 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78604 ']' 00:09:12.044 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78604 00:09:12.044 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:12.044 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.044 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78604 00:09:12.044 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.044 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.044 16:35:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 78604' 00:09:12.044 killing process with pid 78604 00:09:12.044 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78604 00:09:12.044 [2024-12-07 16:35:10.908810] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.044 16:35:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78604 00:09:12.304 [2024-12-07 16:35:10.955520] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.568 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:12.568 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pc8lbWEnFA 00:09:12.568 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:12.568 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:12.568 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:12.568 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:12.568 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:12.568 ************************************ 00:09:12.568 END TEST raid_write_error_test 00:09:12.568 ************************************ 00:09:12.568 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:12.568 00:09:12.568 real 0m3.484s 00:09:12.568 user 0m4.267s 00:09:12.568 sys 0m0.646s 00:09:12.568 16:35:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.568 16:35:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.568 16:35:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:12.568 16:35:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:12.568 16:35:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:12.568 16:35:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.568 16:35:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.568 ************************************ 00:09:12.568 START TEST raid_state_function_test 00:09:12.568 ************************************ 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:12.568 Process raid pid: 78736 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78736 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78736' 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78736 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78736 ']' 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.568 16:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.828 [2024-12-07 16:35:11.497905] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:12.828 [2024-12-07 16:35:11.498149] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.828 [2024-12-07 16:35:11.654061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.087 [2024-12-07 16:35:11.729565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.087 [2024-12-07 16:35:11.805690] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.087 [2024-12-07 16:35:11.805815] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.657 [2024-12-07 16:35:12.337060] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.657 [2024-12-07 16:35:12.337175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.657 [2024-12-07 16:35:12.337209] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.657 [2024-12-07 16:35:12.337233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.657 [2024-12-07 16:35:12.337250] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:13.657 [2024-12-07 16:35:12.337274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.657 
16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.657 "name": "Existed_Raid", 00:09:13.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.657 "strip_size_kb": 0, 00:09:13.657 "state": "configuring", 00:09:13.657 "raid_level": "raid1", 00:09:13.657 "superblock": false, 00:09:13.657 "num_base_bdevs": 3, 00:09:13.657 "num_base_bdevs_discovered": 0, 00:09:13.657 "num_base_bdevs_operational": 3, 00:09:13.657 "base_bdevs_list": [ 00:09:13.657 { 00:09:13.657 "name": "BaseBdev1", 00:09:13.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.657 "is_configured": false, 00:09:13.657 "data_offset": 0, 00:09:13.657 "data_size": 0 00:09:13.657 }, 00:09:13.657 { 00:09:13.657 "name": "BaseBdev2", 00:09:13.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.657 "is_configured": false, 00:09:13.657 "data_offset": 0, 00:09:13.657 "data_size": 0 00:09:13.657 }, 00:09:13.657 { 00:09:13.657 "name": "BaseBdev3", 00:09:13.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.657 "is_configured": false, 00:09:13.657 "data_offset": 0, 00:09:13.657 "data_size": 0 00:09:13.657 } 00:09:13.657 ] 00:09:13.657 }' 00:09:13.657 16:35:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.657 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.918 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.918 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.918 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.918 [2024-12-07 16:35:12.792243] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.918 [2024-12-07 16:35:12.792408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:13.918 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.918 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:13.918 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.918 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.918 [2024-12-07 16:35:12.804264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.918 [2024-12-07 16:35:12.804370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.918 [2024-12-07 16:35:12.804408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.918 [2024-12-07 16:35:12.804433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.918 [2024-12-07 16:35:12.804459] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:13.918 [2024-12-07 16:35:12.804483] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.918 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.918 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.918 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.918 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.178 [2024-12-07 16:35:12.831939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.178 BaseBdev1 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.178 [ 00:09:14.178 { 00:09:14.178 "name": "BaseBdev1", 00:09:14.178 "aliases": [ 00:09:14.178 "d9baa435-6ef1-4549-b188-bfbb69b3d29a" 00:09:14.178 ], 00:09:14.178 "product_name": "Malloc disk", 00:09:14.178 "block_size": 512, 00:09:14.178 "num_blocks": 65536, 00:09:14.178 "uuid": "d9baa435-6ef1-4549-b188-bfbb69b3d29a", 00:09:14.178 "assigned_rate_limits": { 00:09:14.178 "rw_ios_per_sec": 0, 00:09:14.178 "rw_mbytes_per_sec": 0, 00:09:14.178 "r_mbytes_per_sec": 0, 00:09:14.178 "w_mbytes_per_sec": 0 00:09:14.178 }, 00:09:14.178 "claimed": true, 00:09:14.178 "claim_type": "exclusive_write", 00:09:14.178 "zoned": false, 00:09:14.178 "supported_io_types": { 00:09:14.178 "read": true, 00:09:14.178 "write": true, 00:09:14.178 "unmap": true, 00:09:14.178 "flush": true, 00:09:14.178 "reset": true, 00:09:14.178 "nvme_admin": false, 00:09:14.178 "nvme_io": false, 00:09:14.178 "nvme_io_md": false, 00:09:14.178 "write_zeroes": true, 00:09:14.178 "zcopy": true, 00:09:14.178 "get_zone_info": false, 00:09:14.178 "zone_management": false, 00:09:14.178 "zone_append": false, 00:09:14.178 "compare": false, 00:09:14.178 "compare_and_write": false, 00:09:14.178 "abort": true, 00:09:14.178 "seek_hole": false, 00:09:14.178 "seek_data": false, 00:09:14.178 "copy": true, 00:09:14.178 "nvme_iov_md": false 00:09:14.178 }, 00:09:14.178 "memory_domains": [ 00:09:14.178 { 00:09:14.178 "dma_device_id": "system", 00:09:14.178 "dma_device_type": 1 00:09:14.178 }, 00:09:14.178 { 00:09:14.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.178 "dma_device_type": 2 00:09:14.178 } 00:09:14.178 ], 00:09:14.178 "driver_specific": {} 00:09:14.178 } 00:09:14.178 ] 00:09:14.178 16:35:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.178 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.179 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.179 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.179 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.179 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.179 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.179 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.179 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.179 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:14.179 "name": "Existed_Raid", 00:09:14.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.179 "strip_size_kb": 0, 00:09:14.179 "state": "configuring", 00:09:14.179 "raid_level": "raid1", 00:09:14.179 "superblock": false, 00:09:14.179 "num_base_bdevs": 3, 00:09:14.179 "num_base_bdevs_discovered": 1, 00:09:14.179 "num_base_bdevs_operational": 3, 00:09:14.179 "base_bdevs_list": [ 00:09:14.179 { 00:09:14.179 "name": "BaseBdev1", 00:09:14.179 "uuid": "d9baa435-6ef1-4549-b188-bfbb69b3d29a", 00:09:14.179 "is_configured": true, 00:09:14.179 "data_offset": 0, 00:09:14.179 "data_size": 65536 00:09:14.179 }, 00:09:14.179 { 00:09:14.179 "name": "BaseBdev2", 00:09:14.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.179 "is_configured": false, 00:09:14.179 "data_offset": 0, 00:09:14.179 "data_size": 0 00:09:14.179 }, 00:09:14.179 { 00:09:14.179 "name": "BaseBdev3", 00:09:14.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.179 "is_configured": false, 00:09:14.179 "data_offset": 0, 00:09:14.179 "data_size": 0 00:09:14.179 } 00:09:14.179 ] 00:09:14.179 }' 00:09:14.179 16:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.179 16:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.452 [2024-12-07 16:35:13.319187] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.452 [2024-12-07 16:35:13.319326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.452 [2024-12-07 16:35:13.331243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.452 [2024-12-07 16:35:13.333423] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.452 [2024-12-07 16:35:13.333507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.452 [2024-12-07 16:35:13.333521] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.452 [2024-12-07 16:35:13.333531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.452 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.712 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.712 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.712 "name": "Existed_Raid", 00:09:14.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.712 "strip_size_kb": 0, 00:09:14.712 "state": "configuring", 00:09:14.712 "raid_level": "raid1", 00:09:14.712 "superblock": false, 00:09:14.712 "num_base_bdevs": 3, 00:09:14.712 "num_base_bdevs_discovered": 1, 00:09:14.712 "num_base_bdevs_operational": 3, 00:09:14.712 "base_bdevs_list": [ 00:09:14.712 { 00:09:14.712 "name": "BaseBdev1", 00:09:14.712 "uuid": "d9baa435-6ef1-4549-b188-bfbb69b3d29a", 00:09:14.712 "is_configured": true, 00:09:14.712 "data_offset": 0, 00:09:14.712 "data_size": 65536 00:09:14.712 }, 00:09:14.712 { 00:09:14.712 "name": "BaseBdev2", 00:09:14.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.712 
"is_configured": false, 00:09:14.712 "data_offset": 0, 00:09:14.712 "data_size": 0 00:09:14.712 }, 00:09:14.712 { 00:09:14.712 "name": "BaseBdev3", 00:09:14.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.712 "is_configured": false, 00:09:14.712 "data_offset": 0, 00:09:14.712 "data_size": 0 00:09:14.712 } 00:09:14.712 ] 00:09:14.712 }' 00:09:14.712 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.712 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.972 [2024-12-07 16:35:13.804501] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.972 BaseBdev2 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:14.972 16:35:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.972 [ 00:09:14.972 { 00:09:14.972 "name": "BaseBdev2", 00:09:14.972 "aliases": [ 00:09:14.972 "ca6acbb4-99b5-4490-b139-8d8fae4f6d6b" 00:09:14.972 ], 00:09:14.972 "product_name": "Malloc disk", 00:09:14.972 "block_size": 512, 00:09:14.972 "num_blocks": 65536, 00:09:14.972 "uuid": "ca6acbb4-99b5-4490-b139-8d8fae4f6d6b", 00:09:14.972 "assigned_rate_limits": { 00:09:14.972 "rw_ios_per_sec": 0, 00:09:14.972 "rw_mbytes_per_sec": 0, 00:09:14.972 "r_mbytes_per_sec": 0, 00:09:14.972 "w_mbytes_per_sec": 0 00:09:14.972 }, 00:09:14.972 "claimed": true, 00:09:14.972 "claim_type": "exclusive_write", 00:09:14.972 "zoned": false, 00:09:14.972 "supported_io_types": { 00:09:14.972 "read": true, 00:09:14.972 "write": true, 00:09:14.972 "unmap": true, 00:09:14.972 "flush": true, 00:09:14.972 "reset": true, 00:09:14.972 "nvme_admin": false, 00:09:14.972 "nvme_io": false, 00:09:14.972 "nvme_io_md": false, 00:09:14.972 "write_zeroes": true, 00:09:14.972 "zcopy": true, 00:09:14.972 "get_zone_info": false, 00:09:14.972 "zone_management": false, 00:09:14.972 "zone_append": false, 00:09:14.972 "compare": false, 00:09:14.972 "compare_and_write": false, 00:09:14.972 "abort": true, 00:09:14.972 "seek_hole": false, 00:09:14.972 "seek_data": false, 00:09:14.972 "copy": true, 00:09:14.972 "nvme_iov_md": false 00:09:14.972 }, 00:09:14.972 
"memory_domains": [ 00:09:14.972 { 00:09:14.972 "dma_device_id": "system", 00:09:14.972 "dma_device_type": 1 00:09:14.972 }, 00:09:14.972 { 00:09:14.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.972 "dma_device_type": 2 00:09:14.972 } 00:09:14.972 ], 00:09:14.972 "driver_specific": {} 00:09:14.972 } 00:09:14.972 ] 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.972 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.232 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.232 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.232 "name": "Existed_Raid", 00:09:15.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.232 "strip_size_kb": 0, 00:09:15.232 "state": "configuring", 00:09:15.232 "raid_level": "raid1", 00:09:15.232 "superblock": false, 00:09:15.232 "num_base_bdevs": 3, 00:09:15.232 "num_base_bdevs_discovered": 2, 00:09:15.232 "num_base_bdevs_operational": 3, 00:09:15.232 "base_bdevs_list": [ 00:09:15.232 { 00:09:15.232 "name": "BaseBdev1", 00:09:15.232 "uuid": "d9baa435-6ef1-4549-b188-bfbb69b3d29a", 00:09:15.232 "is_configured": true, 00:09:15.232 "data_offset": 0, 00:09:15.232 "data_size": 65536 00:09:15.232 }, 00:09:15.232 { 00:09:15.232 "name": "BaseBdev2", 00:09:15.232 "uuid": "ca6acbb4-99b5-4490-b139-8d8fae4f6d6b", 00:09:15.232 "is_configured": true, 00:09:15.232 "data_offset": 0, 00:09:15.232 "data_size": 65536 00:09:15.232 }, 00:09:15.232 { 00:09:15.232 "name": "BaseBdev3", 00:09:15.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.232 "is_configured": false, 00:09:15.232 "data_offset": 0, 00:09:15.232 "data_size": 0 00:09:15.232 } 00:09:15.232 ] 00:09:15.232 }' 00:09:15.232 16:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.232 16:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.503 [2024-12-07 16:35:14.300462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.503 [2024-12-07 16:35:14.300577] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:15.503 [2024-12-07 16:35:14.300605] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:15.503 [2024-12-07 16:35:14.300961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:15.503 [2024-12-07 16:35:14.301158] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:15.503 [2024-12-07 16:35:14.301197] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:15.503 [2024-12-07 16:35:14.301468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.503 BaseBdev3 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.503 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.503 [ 00:09:15.503 { 00:09:15.503 "name": "BaseBdev3", 00:09:15.503 "aliases": [ 00:09:15.503 "236a7793-80ef-4eb5-8abc-e88872536cbb" 00:09:15.503 ], 00:09:15.503 "product_name": "Malloc disk", 00:09:15.503 "block_size": 512, 00:09:15.503 "num_blocks": 65536, 00:09:15.503 "uuid": "236a7793-80ef-4eb5-8abc-e88872536cbb", 00:09:15.503 "assigned_rate_limits": { 00:09:15.503 "rw_ios_per_sec": 0, 00:09:15.503 "rw_mbytes_per_sec": 0, 00:09:15.503 "r_mbytes_per_sec": 0, 00:09:15.503 "w_mbytes_per_sec": 0 00:09:15.503 }, 00:09:15.503 "claimed": true, 00:09:15.503 "claim_type": "exclusive_write", 00:09:15.503 "zoned": false, 00:09:15.504 "supported_io_types": { 00:09:15.504 "read": true, 00:09:15.504 "write": true, 00:09:15.504 "unmap": true, 00:09:15.504 "flush": true, 00:09:15.504 "reset": true, 00:09:15.504 "nvme_admin": false, 00:09:15.504 "nvme_io": false, 00:09:15.504 "nvme_io_md": false, 00:09:15.504 "write_zeroes": true, 00:09:15.504 "zcopy": true, 00:09:15.504 "get_zone_info": false, 00:09:15.504 "zone_management": false, 00:09:15.504 "zone_append": false, 00:09:15.504 "compare": false, 00:09:15.504 "compare_and_write": false, 00:09:15.504 "abort": true, 00:09:15.504 "seek_hole": false, 00:09:15.504 "seek_data": false, 00:09:15.504 
"copy": true, 00:09:15.504 "nvme_iov_md": false 00:09:15.504 }, 00:09:15.504 "memory_domains": [ 00:09:15.504 { 00:09:15.504 "dma_device_id": "system", 00:09:15.504 "dma_device_type": 1 00:09:15.504 }, 00:09:15.504 { 00:09:15.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.504 "dma_device_type": 2 00:09:15.504 } 00:09:15.504 ], 00:09:15.504 "driver_specific": {} 00:09:15.504 } 00:09:15.504 ] 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.504 16:35:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.504 "name": "Existed_Raid", 00:09:15.504 "uuid": "6c10cf05-55d6-4068-be53-7ec3e083a8ef", 00:09:15.504 "strip_size_kb": 0, 00:09:15.504 "state": "online", 00:09:15.504 "raid_level": "raid1", 00:09:15.504 "superblock": false, 00:09:15.504 "num_base_bdevs": 3, 00:09:15.504 "num_base_bdevs_discovered": 3, 00:09:15.504 "num_base_bdevs_operational": 3, 00:09:15.504 "base_bdevs_list": [ 00:09:15.504 { 00:09:15.504 "name": "BaseBdev1", 00:09:15.504 "uuid": "d9baa435-6ef1-4549-b188-bfbb69b3d29a", 00:09:15.504 "is_configured": true, 00:09:15.504 "data_offset": 0, 00:09:15.504 "data_size": 65536 00:09:15.504 }, 00:09:15.504 { 00:09:15.504 "name": "BaseBdev2", 00:09:15.504 "uuid": "ca6acbb4-99b5-4490-b139-8d8fae4f6d6b", 00:09:15.504 "is_configured": true, 00:09:15.504 "data_offset": 0, 00:09:15.504 "data_size": 65536 00:09:15.504 }, 00:09:15.504 { 00:09:15.504 "name": "BaseBdev3", 00:09:15.504 "uuid": "236a7793-80ef-4eb5-8abc-e88872536cbb", 00:09:15.504 "is_configured": true, 00:09:15.504 "data_offset": 0, 00:09:15.504 "data_size": 65536 00:09:15.504 } 00:09:15.504 ] 00:09:15.504 }' 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.504 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.072 16:35:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.072 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.072 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.072 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.072 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.072 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.072 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.072 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.072 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.072 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.072 [2024-12-07 16:35:14.740045] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.072 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.072 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.072 "name": "Existed_Raid", 00:09:16.072 "aliases": [ 00:09:16.072 "6c10cf05-55d6-4068-be53-7ec3e083a8ef" 00:09:16.072 ], 00:09:16.072 "product_name": "Raid Volume", 00:09:16.072 "block_size": 512, 00:09:16.072 "num_blocks": 65536, 00:09:16.072 "uuid": "6c10cf05-55d6-4068-be53-7ec3e083a8ef", 00:09:16.072 "assigned_rate_limits": { 00:09:16.072 "rw_ios_per_sec": 0, 00:09:16.072 "rw_mbytes_per_sec": 0, 00:09:16.072 "r_mbytes_per_sec": 0, 00:09:16.072 "w_mbytes_per_sec": 0 00:09:16.072 }, 00:09:16.072 "claimed": false, 00:09:16.073 "zoned": false, 
00:09:16.073 "supported_io_types": { 00:09:16.073 "read": true, 00:09:16.073 "write": true, 00:09:16.073 "unmap": false, 00:09:16.073 "flush": false, 00:09:16.073 "reset": true, 00:09:16.073 "nvme_admin": false, 00:09:16.073 "nvme_io": false, 00:09:16.073 "nvme_io_md": false, 00:09:16.073 "write_zeroes": true, 00:09:16.073 "zcopy": false, 00:09:16.073 "get_zone_info": false, 00:09:16.073 "zone_management": false, 00:09:16.073 "zone_append": false, 00:09:16.073 "compare": false, 00:09:16.073 "compare_and_write": false, 00:09:16.073 "abort": false, 00:09:16.073 "seek_hole": false, 00:09:16.073 "seek_data": false, 00:09:16.073 "copy": false, 00:09:16.073 "nvme_iov_md": false 00:09:16.073 }, 00:09:16.073 "memory_domains": [ 00:09:16.073 { 00:09:16.073 "dma_device_id": "system", 00:09:16.073 "dma_device_type": 1 00:09:16.073 }, 00:09:16.073 { 00:09:16.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.073 "dma_device_type": 2 00:09:16.073 }, 00:09:16.073 { 00:09:16.073 "dma_device_id": "system", 00:09:16.073 "dma_device_type": 1 00:09:16.073 }, 00:09:16.073 { 00:09:16.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.073 "dma_device_type": 2 00:09:16.073 }, 00:09:16.073 { 00:09:16.073 "dma_device_id": "system", 00:09:16.073 "dma_device_type": 1 00:09:16.073 }, 00:09:16.073 { 00:09:16.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.073 "dma_device_type": 2 00:09:16.073 } 00:09:16.073 ], 00:09:16.073 "driver_specific": { 00:09:16.073 "raid": { 00:09:16.073 "uuid": "6c10cf05-55d6-4068-be53-7ec3e083a8ef", 00:09:16.073 "strip_size_kb": 0, 00:09:16.073 "state": "online", 00:09:16.073 "raid_level": "raid1", 00:09:16.073 "superblock": false, 00:09:16.073 "num_base_bdevs": 3, 00:09:16.073 "num_base_bdevs_discovered": 3, 00:09:16.073 "num_base_bdevs_operational": 3, 00:09:16.073 "base_bdevs_list": [ 00:09:16.073 { 00:09:16.073 "name": "BaseBdev1", 00:09:16.073 "uuid": "d9baa435-6ef1-4549-b188-bfbb69b3d29a", 00:09:16.073 "is_configured": true, 00:09:16.073 
"data_offset": 0, 00:09:16.073 "data_size": 65536 00:09:16.073 }, 00:09:16.073 { 00:09:16.073 "name": "BaseBdev2", 00:09:16.073 "uuid": "ca6acbb4-99b5-4490-b139-8d8fae4f6d6b", 00:09:16.073 "is_configured": true, 00:09:16.073 "data_offset": 0, 00:09:16.073 "data_size": 65536 00:09:16.073 }, 00:09:16.073 { 00:09:16.073 "name": "BaseBdev3", 00:09:16.073 "uuid": "236a7793-80ef-4eb5-8abc-e88872536cbb", 00:09:16.073 "is_configured": true, 00:09:16.073 "data_offset": 0, 00:09:16.073 "data_size": 65536 00:09:16.073 } 00:09:16.073 ] 00:09:16.073 } 00:09:16.073 } 00:09:16.073 }' 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:16.073 BaseBdev2 00:09:16.073 BaseBdev3' 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.073 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.333 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.333 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:16.333 16:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:16.333 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.333 16:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.333 [2024-12-07 16:35:14.995363] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.333 "name": "Existed_Raid", 00:09:16.333 "uuid": "6c10cf05-55d6-4068-be53-7ec3e083a8ef", 00:09:16.333 "strip_size_kb": 0, 00:09:16.333 "state": "online", 00:09:16.333 "raid_level": "raid1", 00:09:16.333 "superblock": false, 00:09:16.333 "num_base_bdevs": 3, 00:09:16.333 "num_base_bdevs_discovered": 2, 00:09:16.333 "num_base_bdevs_operational": 2, 00:09:16.333 "base_bdevs_list": [ 00:09:16.333 { 00:09:16.333 "name": null, 00:09:16.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.333 "is_configured": false, 00:09:16.333 "data_offset": 0, 00:09:16.333 "data_size": 65536 00:09:16.333 }, 00:09:16.333 { 00:09:16.333 "name": "BaseBdev2", 00:09:16.333 "uuid": "ca6acbb4-99b5-4490-b139-8d8fae4f6d6b", 00:09:16.333 "is_configured": true, 00:09:16.333 "data_offset": 0, 00:09:16.333 "data_size": 65536 00:09:16.333 }, 00:09:16.333 { 00:09:16.333 "name": "BaseBdev3", 00:09:16.333 "uuid": "236a7793-80ef-4eb5-8abc-e88872536cbb", 00:09:16.333 "is_configured": true, 00:09:16.333 "data_offset": 0, 00:09:16.333 "data_size": 65536 00:09:16.333 } 00:09:16.333 ] 
00:09:16.333 }' 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.333 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.592 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:16.592 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.592 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.592 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.592 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.592 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:16.592 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.851 [2024-12-07 16:35:15.523174] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.851 16:35:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.851 [2024-12-07 16:35:15.591705] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:16.851 [2024-12-07 16:35:15.591811] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.851 [2024-12-07 16:35:15.612993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.851 [2024-12-07 16:35:15.613054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.851 [2024-12-07 16:35:15.613073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.851 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:16.852 16:35:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.852 BaseBdev2 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:16.852 
16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.852 [ 00:09:16.852 { 00:09:16.852 "name": "BaseBdev2", 00:09:16.852 "aliases": [ 00:09:16.852 "71fdac74-7348-4406-b218-8106ffdecde9" 00:09:16.852 ], 00:09:16.852 "product_name": "Malloc disk", 00:09:16.852 "block_size": 512, 00:09:16.852 "num_blocks": 65536, 00:09:16.852 "uuid": "71fdac74-7348-4406-b218-8106ffdecde9", 00:09:16.852 "assigned_rate_limits": { 00:09:16.852 "rw_ios_per_sec": 0, 00:09:16.852 "rw_mbytes_per_sec": 0, 00:09:16.852 "r_mbytes_per_sec": 0, 00:09:16.852 "w_mbytes_per_sec": 0 00:09:16.852 }, 00:09:16.852 "claimed": false, 00:09:16.852 "zoned": false, 00:09:16.852 "supported_io_types": { 00:09:16.852 "read": true, 00:09:16.852 "write": true, 00:09:16.852 "unmap": true, 00:09:16.852 "flush": true, 00:09:16.852 "reset": true, 00:09:16.852 "nvme_admin": false, 00:09:16.852 "nvme_io": false, 00:09:16.852 "nvme_io_md": false, 00:09:16.852 "write_zeroes": true, 
00:09:16.852 "zcopy": true, 00:09:16.852 "get_zone_info": false, 00:09:16.852 "zone_management": false, 00:09:16.852 "zone_append": false, 00:09:16.852 "compare": false, 00:09:16.852 "compare_and_write": false, 00:09:16.852 "abort": true, 00:09:16.852 "seek_hole": false, 00:09:16.852 "seek_data": false, 00:09:16.852 "copy": true, 00:09:16.852 "nvme_iov_md": false 00:09:16.852 }, 00:09:16.852 "memory_domains": [ 00:09:16.852 { 00:09:16.852 "dma_device_id": "system", 00:09:16.852 "dma_device_type": 1 00:09:16.852 }, 00:09:16.852 { 00:09:16.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.852 "dma_device_type": 2 00:09:16.852 } 00:09:16.852 ], 00:09:16.852 "driver_specific": {} 00:09:16.852 } 00:09:16.852 ] 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.852 BaseBdev3 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:16.852 16:35:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.852 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.112 [ 00:09:17.112 { 00:09:17.112 "name": "BaseBdev3", 00:09:17.112 "aliases": [ 00:09:17.112 "3e86b515-4cc2-4eb8-9cb0-b8ca79181d66" 00:09:17.112 ], 00:09:17.112 "product_name": "Malloc disk", 00:09:17.112 "block_size": 512, 00:09:17.112 "num_blocks": 65536, 00:09:17.112 "uuid": "3e86b515-4cc2-4eb8-9cb0-b8ca79181d66", 00:09:17.112 "assigned_rate_limits": { 00:09:17.112 "rw_ios_per_sec": 0, 00:09:17.112 "rw_mbytes_per_sec": 0, 00:09:17.112 "r_mbytes_per_sec": 0, 00:09:17.112 "w_mbytes_per_sec": 0 00:09:17.112 }, 00:09:17.112 "claimed": false, 00:09:17.112 "zoned": false, 00:09:17.112 "supported_io_types": { 00:09:17.112 "read": true, 00:09:17.112 "write": true, 00:09:17.112 "unmap": true, 00:09:17.112 "flush": true, 00:09:17.112 "reset": true, 00:09:17.112 "nvme_admin": false, 00:09:17.112 "nvme_io": false, 00:09:17.112 "nvme_io_md": false, 00:09:17.112 "write_zeroes": true, 
00:09:17.112 "zcopy": true, 00:09:17.112 "get_zone_info": false, 00:09:17.112 "zone_management": false, 00:09:17.112 "zone_append": false, 00:09:17.112 "compare": false, 00:09:17.112 "compare_and_write": false, 00:09:17.112 "abort": true, 00:09:17.112 "seek_hole": false, 00:09:17.112 "seek_data": false, 00:09:17.112 "copy": true, 00:09:17.112 "nvme_iov_md": false 00:09:17.112 }, 00:09:17.112 "memory_domains": [ 00:09:17.112 { 00:09:17.112 "dma_device_id": "system", 00:09:17.112 "dma_device_type": 1 00:09:17.112 }, 00:09:17.112 { 00:09:17.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.112 "dma_device_type": 2 00:09:17.112 } 00:09:17.112 ], 00:09:17.112 "driver_specific": {} 00:09:17.112 } 00:09:17.112 ] 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.112 [2024-12-07 16:35:15.781051] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.112 [2024-12-07 16:35:15.781148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.112 [2024-12-07 16:35:15.781190] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.112 [2024-12-07 16:35:15.783475] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:17.112 "name": "Existed_Raid", 00:09:17.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.112 "strip_size_kb": 0, 00:09:17.112 "state": "configuring", 00:09:17.112 "raid_level": "raid1", 00:09:17.112 "superblock": false, 00:09:17.112 "num_base_bdevs": 3, 00:09:17.112 "num_base_bdevs_discovered": 2, 00:09:17.112 "num_base_bdevs_operational": 3, 00:09:17.112 "base_bdevs_list": [ 00:09:17.112 { 00:09:17.112 "name": "BaseBdev1", 00:09:17.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.112 "is_configured": false, 00:09:17.112 "data_offset": 0, 00:09:17.112 "data_size": 0 00:09:17.112 }, 00:09:17.112 { 00:09:17.112 "name": "BaseBdev2", 00:09:17.112 "uuid": "71fdac74-7348-4406-b218-8106ffdecde9", 00:09:17.112 "is_configured": true, 00:09:17.112 "data_offset": 0, 00:09:17.112 "data_size": 65536 00:09:17.112 }, 00:09:17.112 { 00:09:17.112 "name": "BaseBdev3", 00:09:17.112 "uuid": "3e86b515-4cc2-4eb8-9cb0-b8ca79181d66", 00:09:17.112 "is_configured": true, 00:09:17.112 "data_offset": 0, 00:09:17.112 "data_size": 65536 00:09:17.112 } 00:09:17.112 ] 00:09:17.112 }' 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.112 16:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.384 [2024-12-07 16:35:16.228307] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.384 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.644 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.644 "name": "Existed_Raid", 00:09:17.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.644 "strip_size_kb": 0, 00:09:17.644 "state": "configuring", 00:09:17.644 "raid_level": "raid1", 00:09:17.644 "superblock": false, 00:09:17.644 "num_base_bdevs": 3, 
00:09:17.644 "num_base_bdevs_discovered": 1, 00:09:17.644 "num_base_bdevs_operational": 3, 00:09:17.644 "base_bdevs_list": [ 00:09:17.644 { 00:09:17.644 "name": "BaseBdev1", 00:09:17.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.644 "is_configured": false, 00:09:17.644 "data_offset": 0, 00:09:17.644 "data_size": 0 00:09:17.644 }, 00:09:17.644 { 00:09:17.644 "name": null, 00:09:17.644 "uuid": "71fdac74-7348-4406-b218-8106ffdecde9", 00:09:17.644 "is_configured": false, 00:09:17.644 "data_offset": 0, 00:09:17.644 "data_size": 65536 00:09:17.644 }, 00:09:17.644 { 00:09:17.644 "name": "BaseBdev3", 00:09:17.644 "uuid": "3e86b515-4cc2-4eb8-9cb0-b8ca79181d66", 00:09:17.644 "is_configured": true, 00:09:17.644 "data_offset": 0, 00:09:17.644 "data_size": 65536 00:09:17.644 } 00:09:17.644 ] 00:09:17.644 }' 00:09:17.644 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.644 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.905 16:35:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 [2024-12-07 16:35:16.696514] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.905 BaseBdev1 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 [ 00:09:17.905 { 00:09:17.905 "name": "BaseBdev1", 00:09:17.905 "aliases": [ 00:09:17.905 "97e785c9-9b4b-4a74-8f6b-e50e6214bf8c" 00:09:17.905 ], 00:09:17.905 "product_name": "Malloc disk", 
00:09:17.905 "block_size": 512, 00:09:17.905 "num_blocks": 65536, 00:09:17.905 "uuid": "97e785c9-9b4b-4a74-8f6b-e50e6214bf8c", 00:09:17.905 "assigned_rate_limits": { 00:09:17.905 "rw_ios_per_sec": 0, 00:09:17.905 "rw_mbytes_per_sec": 0, 00:09:17.905 "r_mbytes_per_sec": 0, 00:09:17.905 "w_mbytes_per_sec": 0 00:09:17.905 }, 00:09:17.905 "claimed": true, 00:09:17.905 "claim_type": "exclusive_write", 00:09:17.905 "zoned": false, 00:09:17.905 "supported_io_types": { 00:09:17.905 "read": true, 00:09:17.905 "write": true, 00:09:17.905 "unmap": true, 00:09:17.905 "flush": true, 00:09:17.905 "reset": true, 00:09:17.905 "nvme_admin": false, 00:09:17.905 "nvme_io": false, 00:09:17.905 "nvme_io_md": false, 00:09:17.905 "write_zeroes": true, 00:09:17.905 "zcopy": true, 00:09:17.905 "get_zone_info": false, 00:09:17.905 "zone_management": false, 00:09:17.905 "zone_append": false, 00:09:17.905 "compare": false, 00:09:17.905 "compare_and_write": false, 00:09:17.905 "abort": true, 00:09:17.905 "seek_hole": false, 00:09:17.905 "seek_data": false, 00:09:17.905 "copy": true, 00:09:17.905 "nvme_iov_md": false 00:09:17.905 }, 00:09:17.905 "memory_domains": [ 00:09:17.905 { 00:09:17.905 "dma_device_id": "system", 00:09:17.905 "dma_device_type": 1 00:09:17.905 }, 00:09:17.905 { 00:09:17.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.905 "dma_device_type": 2 00:09:17.905 } 00:09:17.905 ], 00:09:17.905 "driver_specific": {} 00:09:17.905 } 00:09:17.905 ] 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.905 "name": "Existed_Raid", 00:09:17.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.905 "strip_size_kb": 0, 00:09:17.905 "state": "configuring", 00:09:17.905 "raid_level": "raid1", 00:09:17.905 "superblock": false, 00:09:17.905 "num_base_bdevs": 3, 00:09:17.905 "num_base_bdevs_discovered": 2, 00:09:17.905 "num_base_bdevs_operational": 3, 00:09:17.905 "base_bdevs_list": [ 00:09:17.905 { 00:09:17.905 "name": "BaseBdev1", 00:09:17.905 "uuid": 
"97e785c9-9b4b-4a74-8f6b-e50e6214bf8c", 00:09:17.905 "is_configured": true, 00:09:17.905 "data_offset": 0, 00:09:17.905 "data_size": 65536 00:09:17.905 }, 00:09:17.905 { 00:09:17.905 "name": null, 00:09:17.905 "uuid": "71fdac74-7348-4406-b218-8106ffdecde9", 00:09:17.905 "is_configured": false, 00:09:17.905 "data_offset": 0, 00:09:17.905 "data_size": 65536 00:09:17.905 }, 00:09:17.905 { 00:09:17.905 "name": "BaseBdev3", 00:09:17.905 "uuid": "3e86b515-4cc2-4eb8-9cb0-b8ca79181d66", 00:09:17.905 "is_configured": true, 00:09:17.905 "data_offset": 0, 00:09:17.905 "data_size": 65536 00:09:17.905 } 00:09:17.905 ] 00:09:17.905 }' 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.905 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.475 [2024-12-07 16:35:17.231654] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:18.475 16:35:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.475 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.476 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.476 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.476 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.476 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.476 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.476 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.476 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.476 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.476 "name": "Existed_Raid", 00:09:18.476 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:18.476 "strip_size_kb": 0, 00:09:18.476 "state": "configuring", 00:09:18.476 "raid_level": "raid1", 00:09:18.476 "superblock": false, 00:09:18.476 "num_base_bdevs": 3, 00:09:18.476 "num_base_bdevs_discovered": 1, 00:09:18.476 "num_base_bdevs_operational": 3, 00:09:18.476 "base_bdevs_list": [ 00:09:18.476 { 00:09:18.476 "name": "BaseBdev1", 00:09:18.476 "uuid": "97e785c9-9b4b-4a74-8f6b-e50e6214bf8c", 00:09:18.476 "is_configured": true, 00:09:18.476 "data_offset": 0, 00:09:18.476 "data_size": 65536 00:09:18.476 }, 00:09:18.476 { 00:09:18.476 "name": null, 00:09:18.476 "uuid": "71fdac74-7348-4406-b218-8106ffdecde9", 00:09:18.476 "is_configured": false, 00:09:18.476 "data_offset": 0, 00:09:18.476 "data_size": 65536 00:09:18.476 }, 00:09:18.476 { 00:09:18.476 "name": null, 00:09:18.476 "uuid": "3e86b515-4cc2-4eb8-9cb0-b8ca79181d66", 00:09:18.476 "is_configured": false, 00:09:18.476 "data_offset": 0, 00:09:18.476 "data_size": 65536 00:09:18.476 } 00:09:18.476 ] 00:09:18.476 }' 00:09:18.476 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.476 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.044 [2024-12-07 16:35:17.742886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.044 "name": "Existed_Raid", 00:09:19.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.044 "strip_size_kb": 0, 00:09:19.044 "state": "configuring", 00:09:19.044 "raid_level": "raid1", 00:09:19.044 "superblock": false, 00:09:19.044 "num_base_bdevs": 3, 00:09:19.044 "num_base_bdevs_discovered": 2, 00:09:19.044 "num_base_bdevs_operational": 3, 00:09:19.044 "base_bdevs_list": [ 00:09:19.044 { 00:09:19.044 "name": "BaseBdev1", 00:09:19.044 "uuid": "97e785c9-9b4b-4a74-8f6b-e50e6214bf8c", 00:09:19.044 "is_configured": true, 00:09:19.044 "data_offset": 0, 00:09:19.044 "data_size": 65536 00:09:19.044 }, 00:09:19.044 { 00:09:19.044 "name": null, 00:09:19.044 "uuid": "71fdac74-7348-4406-b218-8106ffdecde9", 00:09:19.044 "is_configured": false, 00:09:19.044 "data_offset": 0, 00:09:19.044 "data_size": 65536 00:09:19.044 }, 00:09:19.044 { 00:09:19.044 "name": "BaseBdev3", 00:09:19.044 "uuid": "3e86b515-4cc2-4eb8-9cb0-b8ca79181d66", 00:09:19.044 "is_configured": true, 00:09:19.044 "data_offset": 0, 00:09:19.044 "data_size": 65536 00:09:19.044 } 00:09:19.044 ] 00:09:19.044 }' 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.044 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.303 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.303 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.303 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:19.303 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.303 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.563 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:19.563 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:19.563 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.563 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.563 [2024-12-07 16:35:18.226053] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:19.563 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.563 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.563 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.563 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.563 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.563 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.563 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.563 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.563 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.564 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.564 16:35:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.564 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.564 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.564 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.564 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.564 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.564 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.564 "name": "Existed_Raid", 00:09:19.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.564 "strip_size_kb": 0, 00:09:19.564 "state": "configuring", 00:09:19.564 "raid_level": "raid1", 00:09:19.564 "superblock": false, 00:09:19.564 "num_base_bdevs": 3, 00:09:19.564 "num_base_bdevs_discovered": 1, 00:09:19.564 "num_base_bdevs_operational": 3, 00:09:19.564 "base_bdevs_list": [ 00:09:19.564 { 00:09:19.564 "name": null, 00:09:19.564 "uuid": "97e785c9-9b4b-4a74-8f6b-e50e6214bf8c", 00:09:19.564 "is_configured": false, 00:09:19.564 "data_offset": 0, 00:09:19.564 "data_size": 65536 00:09:19.564 }, 00:09:19.564 { 00:09:19.564 "name": null, 00:09:19.564 "uuid": "71fdac74-7348-4406-b218-8106ffdecde9", 00:09:19.564 "is_configured": false, 00:09:19.564 "data_offset": 0, 00:09:19.564 "data_size": 65536 00:09:19.564 }, 00:09:19.564 { 00:09:19.564 "name": "BaseBdev3", 00:09:19.564 "uuid": "3e86b515-4cc2-4eb8-9cb0-b8ca79181d66", 00:09:19.564 "is_configured": true, 00:09:19.564 "data_offset": 0, 00:09:19.564 "data_size": 65536 00:09:19.564 } 00:09:19.564 ] 00:09:19.564 }' 00:09:19.564 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.564 16:35:18 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:19.824 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.824 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.824 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.824 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:19.824 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.084 [2024-12-07 16:35:18.729451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.084 "name": "Existed_Raid", 00:09:20.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.084 "strip_size_kb": 0, 00:09:20.084 "state": "configuring", 00:09:20.084 "raid_level": "raid1", 00:09:20.084 "superblock": false, 00:09:20.084 "num_base_bdevs": 3, 00:09:20.084 "num_base_bdevs_discovered": 2, 00:09:20.084 "num_base_bdevs_operational": 3, 00:09:20.084 "base_bdevs_list": [ 00:09:20.084 { 00:09:20.084 "name": null, 00:09:20.084 "uuid": "97e785c9-9b4b-4a74-8f6b-e50e6214bf8c", 00:09:20.084 "is_configured": false, 00:09:20.084 "data_offset": 0, 00:09:20.084 "data_size": 65536 00:09:20.084 }, 00:09:20.084 { 00:09:20.084 "name": "BaseBdev2", 00:09:20.084 "uuid": "71fdac74-7348-4406-b218-8106ffdecde9", 00:09:20.084 "is_configured": true, 00:09:20.084 "data_offset": 0, 00:09:20.084 "data_size": 65536 00:09:20.084 }, 00:09:20.084 { 
00:09:20.084 "name": "BaseBdev3", 00:09:20.084 "uuid": "3e86b515-4cc2-4eb8-9cb0-b8ca79181d66", 00:09:20.084 "is_configured": true, 00:09:20.084 "data_offset": 0, 00:09:20.084 "data_size": 65536 00:09:20.084 } 00:09:20.084 ] 00:09:20.084 }' 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.084 16:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.343 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.343 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.343 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.343 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:20.343 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.343 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 97e785c9-9b4b-4a74-8f6b-e50e6214bf8c 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.603 16:35:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.603 [2024-12-07 16:35:19.305314] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:20.603 [2024-12-07 16:35:19.305455] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:20.603 [2024-12-07 16:35:19.305481] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:20.603 [2024-12-07 16:35:19.305816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:20.603 [2024-12-07 16:35:19.306022] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:20.603 [2024-12-07 16:35:19.306068] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:20.603 [2024-12-07 16:35:19.306314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.603 NewBaseBdev 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.603 [ 00:09:20.603 { 00:09:20.603 "name": "NewBaseBdev", 00:09:20.603 "aliases": [ 00:09:20.603 "97e785c9-9b4b-4a74-8f6b-e50e6214bf8c" 00:09:20.603 ], 00:09:20.603 "product_name": "Malloc disk", 00:09:20.603 "block_size": 512, 00:09:20.603 "num_blocks": 65536, 00:09:20.603 "uuid": "97e785c9-9b4b-4a74-8f6b-e50e6214bf8c", 00:09:20.603 "assigned_rate_limits": { 00:09:20.603 "rw_ios_per_sec": 0, 00:09:20.603 "rw_mbytes_per_sec": 0, 00:09:20.603 "r_mbytes_per_sec": 0, 00:09:20.603 "w_mbytes_per_sec": 0 00:09:20.603 }, 00:09:20.603 "claimed": true, 00:09:20.603 "claim_type": "exclusive_write", 00:09:20.603 "zoned": false, 00:09:20.603 "supported_io_types": { 00:09:20.603 "read": true, 00:09:20.603 "write": true, 00:09:20.603 "unmap": true, 00:09:20.603 "flush": true, 00:09:20.603 "reset": true, 00:09:20.603 "nvme_admin": false, 00:09:20.603 "nvme_io": false, 00:09:20.603 "nvme_io_md": false, 00:09:20.603 "write_zeroes": true, 00:09:20.603 "zcopy": true, 00:09:20.603 "get_zone_info": false, 00:09:20.603 "zone_management": false, 00:09:20.603 "zone_append": false, 00:09:20.603 "compare": false, 00:09:20.603 "compare_and_write": false, 00:09:20.603 "abort": true, 00:09:20.603 "seek_hole": false, 00:09:20.603 "seek_data": false, 00:09:20.603 "copy": true, 00:09:20.603 "nvme_iov_md": false 00:09:20.603 }, 00:09:20.603 "memory_domains": [ 00:09:20.603 { 00:09:20.603 
"dma_device_id": "system", 00:09:20.603 "dma_device_type": 1 00:09:20.603 }, 00:09:20.603 { 00:09:20.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.603 "dma_device_type": 2 00:09:20.603 } 00:09:20.603 ], 00:09:20.603 "driver_specific": {} 00:09:20.603 } 00:09:20.603 ] 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.603 16:35:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.603 "name": "Existed_Raid", 00:09:20.603 "uuid": "6029a95c-2a1c-44e1-9a50-8edcf0378cc8", 00:09:20.603 "strip_size_kb": 0, 00:09:20.603 "state": "online", 00:09:20.603 "raid_level": "raid1", 00:09:20.603 "superblock": false, 00:09:20.603 "num_base_bdevs": 3, 00:09:20.603 "num_base_bdevs_discovered": 3, 00:09:20.603 "num_base_bdevs_operational": 3, 00:09:20.603 "base_bdevs_list": [ 00:09:20.603 { 00:09:20.603 "name": "NewBaseBdev", 00:09:20.603 "uuid": "97e785c9-9b4b-4a74-8f6b-e50e6214bf8c", 00:09:20.603 "is_configured": true, 00:09:20.603 "data_offset": 0, 00:09:20.603 "data_size": 65536 00:09:20.603 }, 00:09:20.603 { 00:09:20.603 "name": "BaseBdev2", 00:09:20.603 "uuid": "71fdac74-7348-4406-b218-8106ffdecde9", 00:09:20.603 "is_configured": true, 00:09:20.603 "data_offset": 0, 00:09:20.603 "data_size": 65536 00:09:20.603 }, 00:09:20.603 { 00:09:20.603 "name": "BaseBdev3", 00:09:20.603 "uuid": "3e86b515-4cc2-4eb8-9cb0-b8ca79181d66", 00:09:20.603 "is_configured": true, 00:09:20.603 "data_offset": 0, 00:09:20.603 "data_size": 65536 00:09:20.603 } 00:09:20.603 ] 00:09:20.603 }' 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.603 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.173 
16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.173 [2024-12-07 16:35:19.784867] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.173 "name": "Existed_Raid", 00:09:21.173 "aliases": [ 00:09:21.173 "6029a95c-2a1c-44e1-9a50-8edcf0378cc8" 00:09:21.173 ], 00:09:21.173 "product_name": "Raid Volume", 00:09:21.173 "block_size": 512, 00:09:21.173 "num_blocks": 65536, 00:09:21.173 "uuid": "6029a95c-2a1c-44e1-9a50-8edcf0378cc8", 00:09:21.173 "assigned_rate_limits": { 00:09:21.173 "rw_ios_per_sec": 0, 00:09:21.173 "rw_mbytes_per_sec": 0, 00:09:21.173 "r_mbytes_per_sec": 0, 00:09:21.173 "w_mbytes_per_sec": 0 00:09:21.173 }, 00:09:21.173 "claimed": false, 00:09:21.173 "zoned": false, 00:09:21.173 "supported_io_types": { 00:09:21.173 "read": true, 00:09:21.173 "write": true, 00:09:21.173 "unmap": false, 00:09:21.173 "flush": false, 00:09:21.173 "reset": true, 00:09:21.173 "nvme_admin": false, 00:09:21.173 "nvme_io": false, 00:09:21.173 "nvme_io_md": false, 00:09:21.173 "write_zeroes": true, 00:09:21.173 "zcopy": false, 00:09:21.173 
"get_zone_info": false, 00:09:21.173 "zone_management": false, 00:09:21.173 "zone_append": false, 00:09:21.173 "compare": false, 00:09:21.173 "compare_and_write": false, 00:09:21.173 "abort": false, 00:09:21.173 "seek_hole": false, 00:09:21.173 "seek_data": false, 00:09:21.173 "copy": false, 00:09:21.173 "nvme_iov_md": false 00:09:21.173 }, 00:09:21.173 "memory_domains": [ 00:09:21.173 { 00:09:21.173 "dma_device_id": "system", 00:09:21.173 "dma_device_type": 1 00:09:21.173 }, 00:09:21.173 { 00:09:21.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.173 "dma_device_type": 2 00:09:21.173 }, 00:09:21.173 { 00:09:21.173 "dma_device_id": "system", 00:09:21.173 "dma_device_type": 1 00:09:21.173 }, 00:09:21.173 { 00:09:21.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.173 "dma_device_type": 2 00:09:21.173 }, 00:09:21.173 { 00:09:21.173 "dma_device_id": "system", 00:09:21.173 "dma_device_type": 1 00:09:21.173 }, 00:09:21.173 { 00:09:21.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.173 "dma_device_type": 2 00:09:21.173 } 00:09:21.173 ], 00:09:21.173 "driver_specific": { 00:09:21.173 "raid": { 00:09:21.173 "uuid": "6029a95c-2a1c-44e1-9a50-8edcf0378cc8", 00:09:21.173 "strip_size_kb": 0, 00:09:21.173 "state": "online", 00:09:21.173 "raid_level": "raid1", 00:09:21.173 "superblock": false, 00:09:21.173 "num_base_bdevs": 3, 00:09:21.173 "num_base_bdevs_discovered": 3, 00:09:21.173 "num_base_bdevs_operational": 3, 00:09:21.173 "base_bdevs_list": [ 00:09:21.173 { 00:09:21.173 "name": "NewBaseBdev", 00:09:21.173 "uuid": "97e785c9-9b4b-4a74-8f6b-e50e6214bf8c", 00:09:21.173 "is_configured": true, 00:09:21.173 "data_offset": 0, 00:09:21.173 "data_size": 65536 00:09:21.173 }, 00:09:21.173 { 00:09:21.173 "name": "BaseBdev2", 00:09:21.173 "uuid": "71fdac74-7348-4406-b218-8106ffdecde9", 00:09:21.173 "is_configured": true, 00:09:21.173 "data_offset": 0, 00:09:21.173 "data_size": 65536 00:09:21.173 }, 00:09:21.173 { 00:09:21.173 "name": "BaseBdev3", 00:09:21.173 "uuid": 
"3e86b515-4cc2-4eb8-9cb0-b8ca79181d66", 00:09:21.173 "is_configured": true, 00:09:21.173 "data_offset": 0, 00:09:21.173 "data_size": 65536 00:09:21.173 } 00:09:21.173 ] 00:09:21.173 } 00:09:21.173 } 00:09:21.173 }' 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:21.173 BaseBdev2 00:09:21.173 BaseBdev3' 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.173 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:21.173 [2024-12-07 16:35:20.040082] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.173 [2024-12-07 16:35:20.040114] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.173 [2024-12-07 16:35:20.040193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.173 [2024-12-07 16:35:20.040489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.173 [2024-12-07 16:35:20.040503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78736 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78736 ']' 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78736 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.173 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78736 00:09:21.433 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.433 killing process with pid 78736 00:09:21.433 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.433 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78736' 00:09:21.433 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78736 00:09:21.433 
[2024-12-07 16:35:20.090426] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.433 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78736 00:09:21.433 [2024-12-07 16:35:20.147108] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.692 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:21.692 ************************************ 00:09:21.692 END TEST raid_state_function_test 00:09:21.692 ************************************ 00:09:21.692 00:09:21.692 real 0m9.127s 00:09:21.692 user 0m15.203s 00:09:21.692 sys 0m2.015s 00:09:21.692 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.692 16:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.692 16:35:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:21.692 16:35:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:21.692 16:35:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.692 16:35:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.951 ************************************ 00:09:21.951 START TEST raid_state_function_test_sb 00:09:21.951 ************************************ 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:21.951 16:35:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:21.951 
16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79341 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79341' 00:09:21.951 Process raid pid: 79341 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79341 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79341 ']' 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.951 16:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.951 [2024-12-07 16:35:20.694054] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:21.951 [2024-12-07 16:35:20.694280] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.211 [2024-12-07 16:35:20.859427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.211 [2024-12-07 16:35:20.930542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.211 [2024-12-07 16:35:21.007598] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.211 [2024-12-07 16:35:21.007727] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.780 [2024-12-07 16:35:21.527513] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.780 [2024-12-07 16:35:21.527568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.780 [2024-12-07 16:35:21.527582] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.780 [2024-12-07 16:35:21.527594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.780 [2024-12-07 16:35:21.527600] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:22.780 [2024-12-07 16:35:21.527614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.780 "name": "Existed_Raid", 00:09:22.780 "uuid": "64daf102-49c3-4add-86ca-c622ddebff43", 00:09:22.780 "strip_size_kb": 0, 00:09:22.780 "state": "configuring", 00:09:22.780 "raid_level": "raid1", 00:09:22.780 "superblock": true, 00:09:22.780 "num_base_bdevs": 3, 00:09:22.780 "num_base_bdevs_discovered": 0, 00:09:22.780 "num_base_bdevs_operational": 3, 00:09:22.780 "base_bdevs_list": [ 00:09:22.780 { 00:09:22.780 "name": "BaseBdev1", 00:09:22.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.780 "is_configured": false, 00:09:22.780 "data_offset": 0, 00:09:22.780 "data_size": 0 00:09:22.780 }, 00:09:22.780 { 00:09:22.780 "name": "BaseBdev2", 00:09:22.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.780 "is_configured": false, 00:09:22.780 "data_offset": 0, 00:09:22.780 "data_size": 0 00:09:22.780 }, 00:09:22.780 { 00:09:22.780 "name": "BaseBdev3", 00:09:22.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.780 "is_configured": false, 00:09:22.780 "data_offset": 0, 00:09:22.780 "data_size": 0 00:09:22.780 } 00:09:22.780 ] 00:09:22.780 }' 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.780 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.347 [2024-12-07 16:35:21.942738] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.347 [2024-12-07 16:35:21.942855] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.347 [2024-12-07 16:35:21.950737] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.347 [2024-12-07 16:35:21.950813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.347 [2024-12-07 16:35:21.950839] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.347 [2024-12-07 16:35:21.950862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.347 [2024-12-07 16:35:21.950879] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.347 [2024-12-07 16:35:21.950909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.347 [2024-12-07 16:35:21.977684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.347 BaseBdev1 
00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.347 16:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.347 [ 00:09:23.347 { 00:09:23.347 "name": "BaseBdev1", 00:09:23.347 "aliases": [ 00:09:23.347 "70026378-ef0c-4448-8bad-8abb825b73fe" 00:09:23.347 ], 00:09:23.347 "product_name": "Malloc disk", 00:09:23.347 "block_size": 512, 00:09:23.347 "num_blocks": 65536, 00:09:23.347 "uuid": "70026378-ef0c-4448-8bad-8abb825b73fe", 00:09:23.347 "assigned_rate_limits": { 00:09:23.347 
"rw_ios_per_sec": 0, 00:09:23.347 "rw_mbytes_per_sec": 0, 00:09:23.347 "r_mbytes_per_sec": 0, 00:09:23.347 "w_mbytes_per_sec": 0 00:09:23.347 }, 00:09:23.347 "claimed": true, 00:09:23.347 "claim_type": "exclusive_write", 00:09:23.347 "zoned": false, 00:09:23.347 "supported_io_types": { 00:09:23.347 "read": true, 00:09:23.347 "write": true, 00:09:23.347 "unmap": true, 00:09:23.347 "flush": true, 00:09:23.347 "reset": true, 00:09:23.347 "nvme_admin": false, 00:09:23.347 "nvme_io": false, 00:09:23.347 "nvme_io_md": false, 00:09:23.347 "write_zeroes": true, 00:09:23.347 "zcopy": true, 00:09:23.347 "get_zone_info": false, 00:09:23.347 "zone_management": false, 00:09:23.347 "zone_append": false, 00:09:23.347 "compare": false, 00:09:23.347 "compare_and_write": false, 00:09:23.347 "abort": true, 00:09:23.347 "seek_hole": false, 00:09:23.348 "seek_data": false, 00:09:23.348 "copy": true, 00:09:23.348 "nvme_iov_md": false 00:09:23.348 }, 00:09:23.348 "memory_domains": [ 00:09:23.348 { 00:09:23.348 "dma_device_id": "system", 00:09:23.348 "dma_device_type": 1 00:09:23.348 }, 00:09:23.348 { 00:09:23.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.348 "dma_device_type": 2 00:09:23.348 } 00:09:23.348 ], 00:09:23.348 "driver_specific": {} 00:09:23.348 } 00:09:23.348 ] 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.348 "name": "Existed_Raid", 00:09:23.348 "uuid": "f1d4505e-bee5-4b16-8b53-4757f9c1aeb0", 00:09:23.348 "strip_size_kb": 0, 00:09:23.348 "state": "configuring", 00:09:23.348 "raid_level": "raid1", 00:09:23.348 "superblock": true, 00:09:23.348 "num_base_bdevs": 3, 00:09:23.348 "num_base_bdevs_discovered": 1, 00:09:23.348 "num_base_bdevs_operational": 3, 00:09:23.348 "base_bdevs_list": [ 00:09:23.348 { 00:09:23.348 "name": "BaseBdev1", 00:09:23.348 "uuid": "70026378-ef0c-4448-8bad-8abb825b73fe", 00:09:23.348 "is_configured": true, 00:09:23.348 "data_offset": 2048, 00:09:23.348 "data_size": 63488 
00:09:23.348 }, 00:09:23.348 { 00:09:23.348 "name": "BaseBdev2", 00:09:23.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.348 "is_configured": false, 00:09:23.348 "data_offset": 0, 00:09:23.348 "data_size": 0 00:09:23.348 }, 00:09:23.348 { 00:09:23.348 "name": "BaseBdev3", 00:09:23.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.348 "is_configured": false, 00:09:23.348 "data_offset": 0, 00:09:23.348 "data_size": 0 00:09:23.348 } 00:09:23.348 ] 00:09:23.348 }' 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.348 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.607 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.607 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.607 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.607 [2024-12-07 16:35:22.456939] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.607 [2024-12-07 16:35:22.457065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.608 [2024-12-07 16:35:22.468969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.608 [2024-12-07 16:35:22.471142] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.608 [2024-12-07 16:35:22.471188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.608 [2024-12-07 16:35:22.471198] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.608 [2024-12-07 16:35:22.471208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.608 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.867 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.867 "name": "Existed_Raid", 00:09:23.867 "uuid": "084e192a-6a7d-499a-b664-3a4c0cb2c9f9", 00:09:23.867 "strip_size_kb": 0, 00:09:23.867 "state": "configuring", 00:09:23.867 "raid_level": "raid1", 00:09:23.867 "superblock": true, 00:09:23.867 "num_base_bdevs": 3, 00:09:23.867 "num_base_bdevs_discovered": 1, 00:09:23.867 "num_base_bdevs_operational": 3, 00:09:23.867 "base_bdevs_list": [ 00:09:23.867 { 00:09:23.867 "name": "BaseBdev1", 00:09:23.867 "uuid": "70026378-ef0c-4448-8bad-8abb825b73fe", 00:09:23.867 "is_configured": true, 00:09:23.867 "data_offset": 2048, 00:09:23.867 "data_size": 63488 00:09:23.867 }, 00:09:23.867 { 00:09:23.867 "name": "BaseBdev2", 00:09:23.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.867 "is_configured": false, 00:09:23.867 "data_offset": 0, 00:09:23.867 "data_size": 0 00:09:23.867 }, 00:09:23.867 { 00:09:23.867 "name": "BaseBdev3", 00:09:23.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.867 "is_configured": false, 00:09:23.867 "data_offset": 0, 00:09:23.867 "data_size": 0 00:09:23.867 } 00:09:23.867 ] 00:09:23.867 }' 00:09:23.867 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.867 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.126 [2024-12-07 16:35:22.949397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.126 BaseBdev2 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:24.126 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.127 [ 00:09:24.127 { 00:09:24.127 "name": "BaseBdev2", 00:09:24.127 "aliases": [ 00:09:24.127 "630528ef-3e43-44db-8a8c-4863a1d48839" 00:09:24.127 ], 00:09:24.127 "product_name": "Malloc disk", 00:09:24.127 "block_size": 512, 00:09:24.127 "num_blocks": 65536, 00:09:24.127 "uuid": "630528ef-3e43-44db-8a8c-4863a1d48839", 00:09:24.127 "assigned_rate_limits": { 00:09:24.127 "rw_ios_per_sec": 0, 00:09:24.127 "rw_mbytes_per_sec": 0, 00:09:24.127 "r_mbytes_per_sec": 0, 00:09:24.127 "w_mbytes_per_sec": 0 00:09:24.127 }, 00:09:24.127 "claimed": true, 00:09:24.127 "claim_type": "exclusive_write", 00:09:24.127 "zoned": false, 00:09:24.127 "supported_io_types": { 00:09:24.127 "read": true, 00:09:24.127 "write": true, 00:09:24.127 "unmap": true, 00:09:24.127 "flush": true, 00:09:24.127 "reset": true, 00:09:24.127 "nvme_admin": false, 00:09:24.127 "nvme_io": false, 00:09:24.127 "nvme_io_md": false, 00:09:24.127 "write_zeroes": true, 00:09:24.127 "zcopy": true, 00:09:24.127 "get_zone_info": false, 00:09:24.127 "zone_management": false, 00:09:24.127 "zone_append": false, 00:09:24.127 "compare": false, 00:09:24.127 "compare_and_write": false, 00:09:24.127 "abort": true, 00:09:24.127 "seek_hole": false, 00:09:24.127 "seek_data": false, 00:09:24.127 "copy": true, 00:09:24.127 "nvme_iov_md": false 00:09:24.127 }, 00:09:24.127 "memory_domains": [ 00:09:24.127 { 00:09:24.127 "dma_device_id": "system", 00:09:24.127 "dma_device_type": 1 00:09:24.127 }, 00:09:24.127 { 00:09:24.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.127 "dma_device_type": 2 00:09:24.127 } 00:09:24.127 ], 00:09:24.127 "driver_specific": {} 00:09:24.127 } 00:09:24.127 ] 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.127 16:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.127 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.427 
16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.427 "name": "Existed_Raid", 00:09:24.427 "uuid": "084e192a-6a7d-499a-b664-3a4c0cb2c9f9", 00:09:24.427 "strip_size_kb": 0, 00:09:24.427 "state": "configuring", 00:09:24.427 "raid_level": "raid1", 00:09:24.427 "superblock": true, 00:09:24.427 "num_base_bdevs": 3, 00:09:24.427 "num_base_bdevs_discovered": 2, 00:09:24.427 "num_base_bdevs_operational": 3, 00:09:24.427 "base_bdevs_list": [ 00:09:24.427 { 00:09:24.427 "name": "BaseBdev1", 00:09:24.427 "uuid": "70026378-ef0c-4448-8bad-8abb825b73fe", 00:09:24.427 "is_configured": true, 00:09:24.427 "data_offset": 2048, 00:09:24.427 "data_size": 63488 00:09:24.427 }, 00:09:24.427 { 00:09:24.427 "name": "BaseBdev2", 00:09:24.427 "uuid": "630528ef-3e43-44db-8a8c-4863a1d48839", 00:09:24.427 "is_configured": true, 00:09:24.427 "data_offset": 2048, 00:09:24.427 "data_size": 63488 00:09:24.427 }, 00:09:24.427 { 00:09:24.427 "name": "BaseBdev3", 00:09:24.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.427 "is_configured": false, 00:09:24.427 "data_offset": 0, 00:09:24.427 "data_size": 0 00:09:24.427 } 00:09:24.427 ] 00:09:24.427 }' 00:09:24.427 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.427 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.690 [2024-12-07 16:35:23.433536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.690 [2024-12-07 16:35:23.433809] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006980 00:09:24.690 [2024-12-07 16:35:23.433831] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:24.690 [2024-12-07 16:35:23.434168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:24.690 BaseBdev3 00:09:24.690 [2024-12-07 16:35:23.434313] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:24.690 [2024-12-07 16:35:23.434324] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:24.690 [2024-12-07 16:35:23.434458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.690 16:35:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.690 [ 00:09:24.690 { 00:09:24.690 "name": "BaseBdev3", 00:09:24.690 "aliases": [ 00:09:24.690 "7db99e65-a558-4674-b5af-3ddabd7a9efa" 00:09:24.690 ], 00:09:24.690 "product_name": "Malloc disk", 00:09:24.690 "block_size": 512, 00:09:24.690 "num_blocks": 65536, 00:09:24.690 "uuid": "7db99e65-a558-4674-b5af-3ddabd7a9efa", 00:09:24.690 "assigned_rate_limits": { 00:09:24.690 "rw_ios_per_sec": 0, 00:09:24.690 "rw_mbytes_per_sec": 0, 00:09:24.690 "r_mbytes_per_sec": 0, 00:09:24.690 "w_mbytes_per_sec": 0 00:09:24.690 }, 00:09:24.690 "claimed": true, 00:09:24.690 "claim_type": "exclusive_write", 00:09:24.690 "zoned": false, 00:09:24.690 "supported_io_types": { 00:09:24.690 "read": true, 00:09:24.690 "write": true, 00:09:24.690 "unmap": true, 00:09:24.690 "flush": true, 00:09:24.690 "reset": true, 00:09:24.690 "nvme_admin": false, 00:09:24.690 "nvme_io": false, 00:09:24.690 "nvme_io_md": false, 00:09:24.690 "write_zeroes": true, 00:09:24.690 "zcopy": true, 00:09:24.690 "get_zone_info": false, 00:09:24.690 "zone_management": false, 00:09:24.690 "zone_append": false, 00:09:24.690 "compare": false, 00:09:24.690 "compare_and_write": false, 00:09:24.690 "abort": true, 00:09:24.690 "seek_hole": false, 00:09:24.690 "seek_data": false, 00:09:24.690 "copy": true, 00:09:24.690 "nvme_iov_md": false 00:09:24.690 }, 00:09:24.690 "memory_domains": [ 00:09:24.690 { 00:09:24.690 "dma_device_id": "system", 00:09:24.690 "dma_device_type": 1 00:09:24.690 }, 00:09:24.690 { 00:09:24.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.690 "dma_device_type": 2 00:09:24.690 } 00:09:24.690 ], 00:09:24.690 "driver_specific": {} 00:09:24.690 } 00:09:24.690 ] 
00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.690 
16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.690 "name": "Existed_Raid", 00:09:24.690 "uuid": "084e192a-6a7d-499a-b664-3a4c0cb2c9f9", 00:09:24.690 "strip_size_kb": 0, 00:09:24.690 "state": "online", 00:09:24.690 "raid_level": "raid1", 00:09:24.690 "superblock": true, 00:09:24.690 "num_base_bdevs": 3, 00:09:24.690 "num_base_bdevs_discovered": 3, 00:09:24.690 "num_base_bdevs_operational": 3, 00:09:24.690 "base_bdevs_list": [ 00:09:24.690 { 00:09:24.690 "name": "BaseBdev1", 00:09:24.690 "uuid": "70026378-ef0c-4448-8bad-8abb825b73fe", 00:09:24.690 "is_configured": true, 00:09:24.690 "data_offset": 2048, 00:09:24.690 "data_size": 63488 00:09:24.690 }, 00:09:24.690 { 00:09:24.690 "name": "BaseBdev2", 00:09:24.690 "uuid": "630528ef-3e43-44db-8a8c-4863a1d48839", 00:09:24.690 "is_configured": true, 00:09:24.690 "data_offset": 2048, 00:09:24.690 "data_size": 63488 00:09:24.690 }, 00:09:24.690 { 00:09:24.690 "name": "BaseBdev3", 00:09:24.690 "uuid": "7db99e65-a558-4674-b5af-3ddabd7a9efa", 00:09:24.690 "is_configured": true, 00:09:24.690 "data_offset": 2048, 00:09:24.690 "data_size": 63488 00:09:24.690 } 00:09:24.690 ] 00:09:24.690 }' 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.690 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.257 [2024-12-07 16:35:23.909072] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.257 "name": "Existed_Raid", 00:09:25.257 "aliases": [ 00:09:25.257 "084e192a-6a7d-499a-b664-3a4c0cb2c9f9" 00:09:25.257 ], 00:09:25.257 "product_name": "Raid Volume", 00:09:25.257 "block_size": 512, 00:09:25.257 "num_blocks": 63488, 00:09:25.257 "uuid": "084e192a-6a7d-499a-b664-3a4c0cb2c9f9", 00:09:25.257 "assigned_rate_limits": { 00:09:25.257 "rw_ios_per_sec": 0, 00:09:25.257 "rw_mbytes_per_sec": 0, 00:09:25.257 "r_mbytes_per_sec": 0, 00:09:25.257 "w_mbytes_per_sec": 0 00:09:25.257 }, 00:09:25.257 "claimed": false, 00:09:25.257 "zoned": false, 00:09:25.257 "supported_io_types": { 00:09:25.257 "read": true, 00:09:25.257 "write": true, 00:09:25.257 "unmap": false, 00:09:25.257 "flush": false, 00:09:25.257 "reset": true, 00:09:25.257 "nvme_admin": false, 00:09:25.257 "nvme_io": false, 00:09:25.257 "nvme_io_md": false, 00:09:25.257 "write_zeroes": true, 
00:09:25.257 "zcopy": false, 00:09:25.257 "get_zone_info": false, 00:09:25.257 "zone_management": false, 00:09:25.257 "zone_append": false, 00:09:25.257 "compare": false, 00:09:25.257 "compare_and_write": false, 00:09:25.257 "abort": false, 00:09:25.257 "seek_hole": false, 00:09:25.257 "seek_data": false, 00:09:25.257 "copy": false, 00:09:25.257 "nvme_iov_md": false 00:09:25.257 }, 00:09:25.257 "memory_domains": [ 00:09:25.257 { 00:09:25.257 "dma_device_id": "system", 00:09:25.257 "dma_device_type": 1 00:09:25.257 }, 00:09:25.257 { 00:09:25.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.257 "dma_device_type": 2 00:09:25.257 }, 00:09:25.257 { 00:09:25.257 "dma_device_id": "system", 00:09:25.257 "dma_device_type": 1 00:09:25.257 }, 00:09:25.257 { 00:09:25.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.257 "dma_device_type": 2 00:09:25.257 }, 00:09:25.257 { 00:09:25.257 "dma_device_id": "system", 00:09:25.257 "dma_device_type": 1 00:09:25.257 }, 00:09:25.257 { 00:09:25.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.257 "dma_device_type": 2 00:09:25.257 } 00:09:25.257 ], 00:09:25.257 "driver_specific": { 00:09:25.257 "raid": { 00:09:25.257 "uuid": "084e192a-6a7d-499a-b664-3a4c0cb2c9f9", 00:09:25.257 "strip_size_kb": 0, 00:09:25.257 "state": "online", 00:09:25.257 "raid_level": "raid1", 00:09:25.257 "superblock": true, 00:09:25.257 "num_base_bdevs": 3, 00:09:25.257 "num_base_bdevs_discovered": 3, 00:09:25.257 "num_base_bdevs_operational": 3, 00:09:25.257 "base_bdevs_list": [ 00:09:25.257 { 00:09:25.257 "name": "BaseBdev1", 00:09:25.257 "uuid": "70026378-ef0c-4448-8bad-8abb825b73fe", 00:09:25.257 "is_configured": true, 00:09:25.257 "data_offset": 2048, 00:09:25.257 "data_size": 63488 00:09:25.257 }, 00:09:25.257 { 00:09:25.257 "name": "BaseBdev2", 00:09:25.257 "uuid": "630528ef-3e43-44db-8a8c-4863a1d48839", 00:09:25.257 "is_configured": true, 00:09:25.257 "data_offset": 2048, 00:09:25.257 "data_size": 63488 00:09:25.257 }, 00:09:25.257 { 
00:09:25.257 "name": "BaseBdev3", 00:09:25.257 "uuid": "7db99e65-a558-4674-b5af-3ddabd7a9efa", 00:09:25.257 "is_configured": true, 00:09:25.257 "data_offset": 2048, 00:09:25.257 "data_size": 63488 00:09:25.257 } 00:09:25.257 ] 00:09:25.257 } 00:09:25.257 } 00:09:25.257 }' 00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:25.257 BaseBdev2 00:09:25.257 BaseBdev3' 00:09:25.257 16:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.257 16:35:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.257 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.258 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.258 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.258 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:25.258 16:35:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.258 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.258 [2024-12-07 16:35:24.148397] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.515 
16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.515 "name": "Existed_Raid", 00:09:25.515 "uuid": "084e192a-6a7d-499a-b664-3a4c0cb2c9f9", 00:09:25.515 "strip_size_kb": 0, 00:09:25.515 "state": "online", 00:09:25.515 "raid_level": "raid1", 00:09:25.515 "superblock": true, 00:09:25.515 "num_base_bdevs": 3, 00:09:25.515 "num_base_bdevs_discovered": 2, 00:09:25.515 "num_base_bdevs_operational": 2, 00:09:25.515 "base_bdevs_list": [ 00:09:25.515 { 00:09:25.515 "name": null, 00:09:25.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.515 "is_configured": false, 00:09:25.515 "data_offset": 0, 00:09:25.515 "data_size": 63488 00:09:25.515 }, 00:09:25.515 { 00:09:25.515 "name": "BaseBdev2", 00:09:25.515 "uuid": "630528ef-3e43-44db-8a8c-4863a1d48839", 00:09:25.515 "is_configured": true, 00:09:25.515 "data_offset": 2048, 00:09:25.515 "data_size": 63488 00:09:25.515 }, 00:09:25.515 { 00:09:25.515 "name": "BaseBdev3", 00:09:25.515 "uuid": "7db99e65-a558-4674-b5af-3ddabd7a9efa", 00:09:25.515 "is_configured": true, 00:09:25.515 "data_offset": 2048, 00:09:25.515 "data_size": 63488 00:09:25.515 } 00:09:25.515 ] 00:09:25.515 }' 00:09:25.515 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.515 
16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.773 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:25.773 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.773 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.773 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.773 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.773 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:25.773 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.773 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:25.773 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:25.774 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:25.774 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.774 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.774 [2024-12-07 16:35:24.652559] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.032 [2024-12-07 16:35:24.712895] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:26.032 [2024-12-07 16:35:24.713076] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.032 [2024-12-07 16:35:24.734370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.032 [2024-12-07 16:35:24.734538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.032 [2024-12-07 16:35:24.734603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.032 BaseBdev2 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:26.032 16:35:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.032 [ 00:09:26.032 { 00:09:26.032 "name": "BaseBdev2", 00:09:26.032 "aliases": [ 00:09:26.032 "135ad59c-4f90-4450-885f-1d8f01f8a2b5" 00:09:26.032 ], 00:09:26.032 "product_name": "Malloc disk", 00:09:26.032 "block_size": 512, 00:09:26.032 "num_blocks": 65536, 00:09:26.032 "uuid": "135ad59c-4f90-4450-885f-1d8f01f8a2b5", 00:09:26.032 "assigned_rate_limits": { 00:09:26.032 "rw_ios_per_sec": 0, 00:09:26.032 "rw_mbytes_per_sec": 0, 00:09:26.032 "r_mbytes_per_sec": 0, 00:09:26.032 "w_mbytes_per_sec": 0 00:09:26.032 }, 00:09:26.032 "claimed": false, 00:09:26.032 "zoned": false, 00:09:26.032 "supported_io_types": { 00:09:26.032 "read": true, 00:09:26.032 "write": true, 00:09:26.032 "unmap": true, 00:09:26.032 "flush": true, 00:09:26.032 "reset": true, 00:09:26.032 "nvme_admin": false, 00:09:26.032 "nvme_io": false, 00:09:26.032 "nvme_io_md": false, 00:09:26.032 
"write_zeroes": true, 00:09:26.032 "zcopy": true, 00:09:26.032 "get_zone_info": false, 00:09:26.032 "zone_management": false, 00:09:26.032 "zone_append": false, 00:09:26.032 "compare": false, 00:09:26.032 "compare_and_write": false, 00:09:26.032 "abort": true, 00:09:26.032 "seek_hole": false, 00:09:26.032 "seek_data": false, 00:09:26.032 "copy": true, 00:09:26.032 "nvme_iov_md": false 00:09:26.032 }, 00:09:26.032 "memory_domains": [ 00:09:26.032 { 00:09:26.032 "dma_device_id": "system", 00:09:26.032 "dma_device_type": 1 00:09:26.032 }, 00:09:26.032 { 00:09:26.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.032 "dma_device_type": 2 00:09:26.032 } 00:09:26.032 ], 00:09:26.032 "driver_specific": {} 00:09:26.032 } 00:09:26.032 ] 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.032 BaseBdev3 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
local bdev_timeout= 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.032 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.032 [ 00:09:26.032 { 00:09:26.032 "name": "BaseBdev3", 00:09:26.032 "aliases": [ 00:09:26.032 "ce81cf31-b4b8-44e6-bda2-75b0d5b58975" 00:09:26.032 ], 00:09:26.032 "product_name": "Malloc disk", 00:09:26.032 "block_size": 512, 00:09:26.032 "num_blocks": 65536, 00:09:26.032 "uuid": "ce81cf31-b4b8-44e6-bda2-75b0d5b58975", 00:09:26.032 "assigned_rate_limits": { 00:09:26.032 "rw_ios_per_sec": 0, 00:09:26.032 "rw_mbytes_per_sec": 0, 00:09:26.032 "r_mbytes_per_sec": 0, 00:09:26.032 "w_mbytes_per_sec": 0 00:09:26.032 }, 00:09:26.032 "claimed": false, 00:09:26.032 "zoned": false, 00:09:26.032 "supported_io_types": { 00:09:26.032 "read": true, 00:09:26.032 "write": true, 00:09:26.032 "unmap": true, 00:09:26.032 "flush": true, 00:09:26.032 "reset": true, 00:09:26.032 "nvme_admin": false, 00:09:26.032 "nvme_io": false, 
00:09:26.032 "nvme_io_md": false, 00:09:26.033 "write_zeroes": true, 00:09:26.033 "zcopy": true, 00:09:26.033 "get_zone_info": false, 00:09:26.033 "zone_management": false, 00:09:26.033 "zone_append": false, 00:09:26.033 "compare": false, 00:09:26.033 "compare_and_write": false, 00:09:26.033 "abort": true, 00:09:26.033 "seek_hole": false, 00:09:26.033 "seek_data": false, 00:09:26.033 "copy": true, 00:09:26.033 "nvme_iov_md": false 00:09:26.033 }, 00:09:26.033 "memory_domains": [ 00:09:26.033 { 00:09:26.033 "dma_device_id": "system", 00:09:26.033 "dma_device_type": 1 00:09:26.033 }, 00:09:26.033 { 00:09:26.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.033 "dma_device_type": 2 00:09:26.033 } 00:09:26.033 ], 00:09:26.033 "driver_specific": {} 00:09:26.033 } 00:09:26.033 ] 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.033 [2024-12-07 16:35:24.915234] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:26.033 [2024-12-07 16:35:24.915375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:26.033 [2024-12-07 16:35:24.915413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:09:26.033 [2024-12-07 16:35:24.917734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.033 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.292 16:35:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.292 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.292 "name": "Existed_Raid", 00:09:26.292 "uuid": "24a83e4c-cfa2-4d42-a30d-55653f83b2fa", 00:09:26.292 "strip_size_kb": 0, 00:09:26.292 "state": "configuring", 00:09:26.292 "raid_level": "raid1", 00:09:26.292 "superblock": true, 00:09:26.292 "num_base_bdevs": 3, 00:09:26.292 "num_base_bdevs_discovered": 2, 00:09:26.292 "num_base_bdevs_operational": 3, 00:09:26.292 "base_bdevs_list": [ 00:09:26.292 { 00:09:26.292 "name": "BaseBdev1", 00:09:26.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.292 "is_configured": false, 00:09:26.292 "data_offset": 0, 00:09:26.292 "data_size": 0 00:09:26.292 }, 00:09:26.292 { 00:09:26.292 "name": "BaseBdev2", 00:09:26.292 "uuid": "135ad59c-4f90-4450-885f-1d8f01f8a2b5", 00:09:26.292 "is_configured": true, 00:09:26.292 "data_offset": 2048, 00:09:26.292 "data_size": 63488 00:09:26.292 }, 00:09:26.292 { 00:09:26.292 "name": "BaseBdev3", 00:09:26.292 "uuid": "ce81cf31-b4b8-44e6-bda2-75b0d5b58975", 00:09:26.292 "is_configured": true, 00:09:26.292 "data_offset": 2048, 00:09:26.292 "data_size": 63488 00:09:26.292 } 00:09:26.292 ] 00:09:26.292 }' 00:09:26.292 16:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.292 16:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.551 [2024-12-07 16:35:25.350435] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.551 "name": "Existed_Raid", 00:09:26.551 "uuid": 
"24a83e4c-cfa2-4d42-a30d-55653f83b2fa", 00:09:26.551 "strip_size_kb": 0, 00:09:26.551 "state": "configuring", 00:09:26.551 "raid_level": "raid1", 00:09:26.551 "superblock": true, 00:09:26.551 "num_base_bdevs": 3, 00:09:26.551 "num_base_bdevs_discovered": 1, 00:09:26.551 "num_base_bdevs_operational": 3, 00:09:26.551 "base_bdevs_list": [ 00:09:26.551 { 00:09:26.551 "name": "BaseBdev1", 00:09:26.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.551 "is_configured": false, 00:09:26.551 "data_offset": 0, 00:09:26.551 "data_size": 0 00:09:26.551 }, 00:09:26.551 { 00:09:26.551 "name": null, 00:09:26.551 "uuid": "135ad59c-4f90-4450-885f-1d8f01f8a2b5", 00:09:26.551 "is_configured": false, 00:09:26.551 "data_offset": 0, 00:09:26.551 "data_size": 63488 00:09:26.551 }, 00:09:26.551 { 00:09:26.551 "name": "BaseBdev3", 00:09:26.551 "uuid": "ce81cf31-b4b8-44e6-bda2-75b0d5b58975", 00:09:26.551 "is_configured": true, 00:09:26.551 "data_offset": 2048, 00:09:26.551 "data_size": 63488 00:09:26.551 } 00:09:26.551 ] 00:09:26.551 }' 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.551 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:27.117 16:35:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.117 [2024-12-07 16:35:25.866334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.117 BaseBdev1 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:27.117 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.117 [ 00:09:27.117 { 00:09:27.117 "name": "BaseBdev1", 00:09:27.117 "aliases": [ 00:09:27.117 "426659cc-5cc8-43b0-8695-e80d2b7c4090" 00:09:27.117 ], 00:09:27.117 "product_name": "Malloc disk", 00:09:27.117 "block_size": 512, 00:09:27.117 "num_blocks": 65536, 00:09:27.117 "uuid": "426659cc-5cc8-43b0-8695-e80d2b7c4090", 00:09:27.117 "assigned_rate_limits": { 00:09:27.117 "rw_ios_per_sec": 0, 00:09:27.117 "rw_mbytes_per_sec": 0, 00:09:27.117 "r_mbytes_per_sec": 0, 00:09:27.117 "w_mbytes_per_sec": 0 00:09:27.117 }, 00:09:27.117 "claimed": true, 00:09:27.117 "claim_type": "exclusive_write", 00:09:27.118 "zoned": false, 00:09:27.118 "supported_io_types": { 00:09:27.118 "read": true, 00:09:27.118 "write": true, 00:09:27.118 "unmap": true, 00:09:27.118 "flush": true, 00:09:27.118 "reset": true, 00:09:27.118 "nvme_admin": false, 00:09:27.118 "nvme_io": false, 00:09:27.118 "nvme_io_md": false, 00:09:27.118 "write_zeroes": true, 00:09:27.118 "zcopy": true, 00:09:27.118 "get_zone_info": false, 00:09:27.118 "zone_management": false, 00:09:27.118 "zone_append": false, 00:09:27.118 "compare": false, 00:09:27.118 "compare_and_write": false, 00:09:27.118 "abort": true, 00:09:27.118 "seek_hole": false, 00:09:27.118 "seek_data": false, 00:09:27.118 "copy": true, 00:09:27.118 "nvme_iov_md": false 00:09:27.118 }, 00:09:27.118 "memory_domains": [ 00:09:27.118 { 00:09:27.118 "dma_device_id": "system", 00:09:27.118 "dma_device_type": 1 00:09:27.118 }, 00:09:27.118 { 00:09:27.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.118 "dma_device_type": 2 00:09:27.118 } 00:09:27.118 ], 00:09:27.118 "driver_specific": {} 00:09:27.118 } 00:09:27.118 ] 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:27.118 
16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.118 "name": "Existed_Raid", 00:09:27.118 "uuid": "24a83e4c-cfa2-4d42-a30d-55653f83b2fa", 00:09:27.118 "strip_size_kb": 0, 
00:09:27.118 "state": "configuring", 00:09:27.118 "raid_level": "raid1", 00:09:27.118 "superblock": true, 00:09:27.118 "num_base_bdevs": 3, 00:09:27.118 "num_base_bdevs_discovered": 2, 00:09:27.118 "num_base_bdevs_operational": 3, 00:09:27.118 "base_bdevs_list": [ 00:09:27.118 { 00:09:27.118 "name": "BaseBdev1", 00:09:27.118 "uuid": "426659cc-5cc8-43b0-8695-e80d2b7c4090", 00:09:27.118 "is_configured": true, 00:09:27.118 "data_offset": 2048, 00:09:27.118 "data_size": 63488 00:09:27.118 }, 00:09:27.118 { 00:09:27.118 "name": null, 00:09:27.118 "uuid": "135ad59c-4f90-4450-885f-1d8f01f8a2b5", 00:09:27.118 "is_configured": false, 00:09:27.118 "data_offset": 0, 00:09:27.118 "data_size": 63488 00:09:27.118 }, 00:09:27.118 { 00:09:27.118 "name": "BaseBdev3", 00:09:27.118 "uuid": "ce81cf31-b4b8-44e6-bda2-75b0d5b58975", 00:09:27.118 "is_configured": true, 00:09:27.118 "data_offset": 2048, 00:09:27.118 "data_size": 63488 00:09:27.118 } 00:09:27.118 ] 00:09:27.118 }' 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.118 16:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 [2024-12-07 16:35:26.421427] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.688 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.689 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.689 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.689 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.689 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.689 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.689 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.689 16:35:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.689 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.689 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.689 "name": "Existed_Raid", 00:09:27.689 "uuid": "24a83e4c-cfa2-4d42-a30d-55653f83b2fa", 00:09:27.689 "strip_size_kb": 0, 00:09:27.689 "state": "configuring", 00:09:27.689 "raid_level": "raid1", 00:09:27.689 "superblock": true, 00:09:27.689 "num_base_bdevs": 3, 00:09:27.689 "num_base_bdevs_discovered": 1, 00:09:27.689 "num_base_bdevs_operational": 3, 00:09:27.689 "base_bdevs_list": [ 00:09:27.689 { 00:09:27.689 "name": "BaseBdev1", 00:09:27.689 "uuid": "426659cc-5cc8-43b0-8695-e80d2b7c4090", 00:09:27.689 "is_configured": true, 00:09:27.689 "data_offset": 2048, 00:09:27.689 "data_size": 63488 00:09:27.689 }, 00:09:27.689 { 00:09:27.689 "name": null, 00:09:27.689 "uuid": "135ad59c-4f90-4450-885f-1d8f01f8a2b5", 00:09:27.689 "is_configured": false, 00:09:27.689 "data_offset": 0, 00:09:27.689 "data_size": 63488 00:09:27.689 }, 00:09:27.689 { 00:09:27.689 "name": null, 00:09:27.689 "uuid": "ce81cf31-b4b8-44e6-bda2-75b0d5b58975", 00:09:27.689 "is_configured": false, 00:09:27.689 "data_offset": 0, 00:09:27.689 "data_size": 63488 00:09:27.689 } 00:09:27.689 ] 00:09:27.689 }' 00:09:27.689 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.689 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.256 16:35:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.256 [2024-12-07 16:35:26.948616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.256 16:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.256 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.256 "name": "Existed_Raid", 00:09:28.256 "uuid": "24a83e4c-cfa2-4d42-a30d-55653f83b2fa", 00:09:28.256 "strip_size_kb": 0, 00:09:28.256 "state": "configuring", 00:09:28.256 "raid_level": "raid1", 00:09:28.256 "superblock": true, 00:09:28.256 "num_base_bdevs": 3, 00:09:28.256 "num_base_bdevs_discovered": 2, 00:09:28.256 "num_base_bdevs_operational": 3, 00:09:28.256 "base_bdevs_list": [ 00:09:28.256 { 00:09:28.256 "name": "BaseBdev1", 00:09:28.256 "uuid": "426659cc-5cc8-43b0-8695-e80d2b7c4090", 00:09:28.256 "is_configured": true, 00:09:28.256 "data_offset": 2048, 00:09:28.256 "data_size": 63488 00:09:28.256 }, 00:09:28.256 { 00:09:28.256 "name": null, 00:09:28.256 "uuid": "135ad59c-4f90-4450-885f-1d8f01f8a2b5", 00:09:28.256 "is_configured": false, 00:09:28.256 "data_offset": 0, 00:09:28.256 "data_size": 63488 00:09:28.256 }, 00:09:28.256 { 00:09:28.256 "name": "BaseBdev3", 00:09:28.256 "uuid": "ce81cf31-b4b8-44e6-bda2-75b0d5b58975", 00:09:28.256 "is_configured": true, 00:09:28.256 "data_offset": 2048, 00:09:28.256 "data_size": 63488 00:09:28.256 } 00:09:28.256 ] 00:09:28.256 }' 00:09:28.256 16:35:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.256 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.515 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.515 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:28.515 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.515 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.774 [2024-12-07 16:35:27.455766] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.774 "name": "Existed_Raid", 00:09:28.774 "uuid": "24a83e4c-cfa2-4d42-a30d-55653f83b2fa", 00:09:28.774 "strip_size_kb": 0, 00:09:28.774 "state": "configuring", 00:09:28.774 "raid_level": "raid1", 00:09:28.774 "superblock": true, 00:09:28.774 "num_base_bdevs": 3, 00:09:28.774 "num_base_bdevs_discovered": 1, 00:09:28.774 "num_base_bdevs_operational": 3, 00:09:28.774 "base_bdevs_list": [ 00:09:28.774 { 00:09:28.774 "name": null, 00:09:28.774 "uuid": "426659cc-5cc8-43b0-8695-e80d2b7c4090", 00:09:28.774 "is_configured": false, 00:09:28.774 "data_offset": 0, 00:09:28.774 "data_size": 63488 00:09:28.774 }, 00:09:28.774 { 00:09:28.774 "name": null, 00:09:28.774 "uuid": 
"135ad59c-4f90-4450-885f-1d8f01f8a2b5", 00:09:28.774 "is_configured": false, 00:09:28.774 "data_offset": 0, 00:09:28.774 "data_size": 63488 00:09:28.774 }, 00:09:28.774 { 00:09:28.774 "name": "BaseBdev3", 00:09:28.774 "uuid": "ce81cf31-b4b8-44e6-bda2-75b0d5b58975", 00:09:28.774 "is_configured": true, 00:09:28.774 "data_offset": 2048, 00:09:28.774 "data_size": 63488 00:09:28.774 } 00:09:28.774 ] 00:09:28.774 }' 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.774 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.033 [2024-12-07 16:35:27.923449] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.033 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.292 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.292 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.292 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.292 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.292 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.292 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.292 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.292 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.292 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.292 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.292 "name": "Existed_Raid", 00:09:29.292 "uuid": "24a83e4c-cfa2-4d42-a30d-55653f83b2fa", 00:09:29.292 "strip_size_kb": 0, 00:09:29.292 "state": "configuring", 00:09:29.292 
"raid_level": "raid1", 00:09:29.292 "superblock": true, 00:09:29.292 "num_base_bdevs": 3, 00:09:29.292 "num_base_bdevs_discovered": 2, 00:09:29.292 "num_base_bdevs_operational": 3, 00:09:29.292 "base_bdevs_list": [ 00:09:29.292 { 00:09:29.292 "name": null, 00:09:29.292 "uuid": "426659cc-5cc8-43b0-8695-e80d2b7c4090", 00:09:29.292 "is_configured": false, 00:09:29.292 "data_offset": 0, 00:09:29.292 "data_size": 63488 00:09:29.292 }, 00:09:29.292 { 00:09:29.292 "name": "BaseBdev2", 00:09:29.292 "uuid": "135ad59c-4f90-4450-885f-1d8f01f8a2b5", 00:09:29.292 "is_configured": true, 00:09:29.292 "data_offset": 2048, 00:09:29.292 "data_size": 63488 00:09:29.292 }, 00:09:29.292 { 00:09:29.292 "name": "BaseBdev3", 00:09:29.292 "uuid": "ce81cf31-b4b8-44e6-bda2-75b0d5b58975", 00:09:29.292 "is_configured": true, 00:09:29.292 "data_offset": 2048, 00:09:29.292 "data_size": 63488 00:09:29.292 } 00:09:29.292 ] 00:09:29.293 }' 00:09:29.293 16:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.293 16:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.551 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.551 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.551 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.551 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:29.551 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.551 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:29.551 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.551 16:35:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.551 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.551 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:29.551 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 426659cc-5cc8-43b0-8695-e80d2b7c4090 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.810 [2024-12-07 16:35:28.475424] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:29.810 [2024-12-07 16:35:28.475623] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:29.810 [2024-12-07 16:35:28.475636] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:29.810 [2024-12-07 16:35:28.475959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:29.810 NewBaseBdev 00:09:29.810 [2024-12-07 16:35:28.476110] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:29.810 [2024-12-07 16:35:28.476127] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:29.810 [2024-12-07 16:35:28.476231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:29.810 
16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.810 [ 00:09:29.810 { 00:09:29.810 "name": "NewBaseBdev", 00:09:29.810 "aliases": [ 00:09:29.810 "426659cc-5cc8-43b0-8695-e80d2b7c4090" 00:09:29.810 ], 00:09:29.810 "product_name": "Malloc disk", 00:09:29.810 "block_size": 512, 00:09:29.810 "num_blocks": 65536, 00:09:29.810 "uuid": "426659cc-5cc8-43b0-8695-e80d2b7c4090", 00:09:29.810 "assigned_rate_limits": { 00:09:29.810 "rw_ios_per_sec": 0, 00:09:29.810 "rw_mbytes_per_sec": 0, 00:09:29.810 "r_mbytes_per_sec": 0, 00:09:29.810 "w_mbytes_per_sec": 0 00:09:29.810 }, 00:09:29.810 "claimed": true, 00:09:29.810 "claim_type": "exclusive_write", 00:09:29.810 
"zoned": false, 00:09:29.810 "supported_io_types": { 00:09:29.810 "read": true, 00:09:29.810 "write": true, 00:09:29.810 "unmap": true, 00:09:29.810 "flush": true, 00:09:29.810 "reset": true, 00:09:29.810 "nvme_admin": false, 00:09:29.810 "nvme_io": false, 00:09:29.810 "nvme_io_md": false, 00:09:29.810 "write_zeroes": true, 00:09:29.810 "zcopy": true, 00:09:29.810 "get_zone_info": false, 00:09:29.810 "zone_management": false, 00:09:29.810 "zone_append": false, 00:09:29.810 "compare": false, 00:09:29.810 "compare_and_write": false, 00:09:29.810 "abort": true, 00:09:29.810 "seek_hole": false, 00:09:29.810 "seek_data": false, 00:09:29.810 "copy": true, 00:09:29.810 "nvme_iov_md": false 00:09:29.810 }, 00:09:29.810 "memory_domains": [ 00:09:29.810 { 00:09:29.810 "dma_device_id": "system", 00:09:29.810 "dma_device_type": 1 00:09:29.810 }, 00:09:29.810 { 00:09:29.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.810 "dma_device_type": 2 00:09:29.810 } 00:09:29.810 ], 00:09:29.810 "driver_specific": {} 00:09:29.810 } 00:09:29.810 ] 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.810 "name": "Existed_Raid", 00:09:29.810 "uuid": "24a83e4c-cfa2-4d42-a30d-55653f83b2fa", 00:09:29.810 "strip_size_kb": 0, 00:09:29.810 "state": "online", 00:09:29.810 "raid_level": "raid1", 00:09:29.810 "superblock": true, 00:09:29.810 "num_base_bdevs": 3, 00:09:29.810 "num_base_bdevs_discovered": 3, 00:09:29.810 "num_base_bdevs_operational": 3, 00:09:29.810 "base_bdevs_list": [ 00:09:29.810 { 00:09:29.810 "name": "NewBaseBdev", 00:09:29.810 "uuid": "426659cc-5cc8-43b0-8695-e80d2b7c4090", 00:09:29.810 "is_configured": true, 00:09:29.810 "data_offset": 2048, 00:09:29.810 "data_size": 63488 00:09:29.810 }, 00:09:29.810 { 00:09:29.810 "name": "BaseBdev2", 00:09:29.810 "uuid": "135ad59c-4f90-4450-885f-1d8f01f8a2b5", 00:09:29.810 "is_configured": true, 00:09:29.810 "data_offset": 2048, 00:09:29.810 "data_size": 63488 00:09:29.810 }, 00:09:29.810 
{ 00:09:29.810 "name": "BaseBdev3", 00:09:29.810 "uuid": "ce81cf31-b4b8-44e6-bda2-75b0d5b58975", 00:09:29.810 "is_configured": true, 00:09:29.810 "data_offset": 2048, 00:09:29.810 "data_size": 63488 00:09:29.810 } 00:09:29.810 ] 00:09:29.810 }' 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.810 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.379 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:30.379 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:30.379 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.379 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.379 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.379 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.379 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:30.379 16:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.379 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.379 16:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.379 [2024-12-07 16:35:28.995011] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.379 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.379 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.379 "name": "Existed_Raid", 00:09:30.379 
"aliases": [ 00:09:30.379 "24a83e4c-cfa2-4d42-a30d-55653f83b2fa" 00:09:30.379 ], 00:09:30.379 "product_name": "Raid Volume", 00:09:30.379 "block_size": 512, 00:09:30.379 "num_blocks": 63488, 00:09:30.379 "uuid": "24a83e4c-cfa2-4d42-a30d-55653f83b2fa", 00:09:30.379 "assigned_rate_limits": { 00:09:30.379 "rw_ios_per_sec": 0, 00:09:30.379 "rw_mbytes_per_sec": 0, 00:09:30.379 "r_mbytes_per_sec": 0, 00:09:30.379 "w_mbytes_per_sec": 0 00:09:30.379 }, 00:09:30.379 "claimed": false, 00:09:30.379 "zoned": false, 00:09:30.379 "supported_io_types": { 00:09:30.379 "read": true, 00:09:30.379 "write": true, 00:09:30.379 "unmap": false, 00:09:30.379 "flush": false, 00:09:30.379 "reset": true, 00:09:30.379 "nvme_admin": false, 00:09:30.379 "nvme_io": false, 00:09:30.379 "nvme_io_md": false, 00:09:30.379 "write_zeroes": true, 00:09:30.379 "zcopy": false, 00:09:30.379 "get_zone_info": false, 00:09:30.379 "zone_management": false, 00:09:30.379 "zone_append": false, 00:09:30.379 "compare": false, 00:09:30.379 "compare_and_write": false, 00:09:30.379 "abort": false, 00:09:30.379 "seek_hole": false, 00:09:30.379 "seek_data": false, 00:09:30.379 "copy": false, 00:09:30.379 "nvme_iov_md": false 00:09:30.379 }, 00:09:30.379 "memory_domains": [ 00:09:30.379 { 00:09:30.379 "dma_device_id": "system", 00:09:30.379 "dma_device_type": 1 00:09:30.379 }, 00:09:30.379 { 00:09:30.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.379 "dma_device_type": 2 00:09:30.379 }, 00:09:30.379 { 00:09:30.379 "dma_device_id": "system", 00:09:30.379 "dma_device_type": 1 00:09:30.379 }, 00:09:30.379 { 00:09:30.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.379 "dma_device_type": 2 00:09:30.379 }, 00:09:30.379 { 00:09:30.379 "dma_device_id": "system", 00:09:30.379 "dma_device_type": 1 00:09:30.379 }, 00:09:30.379 { 00:09:30.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.379 "dma_device_type": 2 00:09:30.379 } 00:09:30.379 ], 00:09:30.379 "driver_specific": { 00:09:30.379 "raid": { 00:09:30.379 
"uuid": "24a83e4c-cfa2-4d42-a30d-55653f83b2fa", 00:09:30.379 "strip_size_kb": 0, 00:09:30.379 "state": "online", 00:09:30.379 "raid_level": "raid1", 00:09:30.379 "superblock": true, 00:09:30.379 "num_base_bdevs": 3, 00:09:30.379 "num_base_bdevs_discovered": 3, 00:09:30.379 "num_base_bdevs_operational": 3, 00:09:30.379 "base_bdevs_list": [ 00:09:30.379 { 00:09:30.379 "name": "NewBaseBdev", 00:09:30.379 "uuid": "426659cc-5cc8-43b0-8695-e80d2b7c4090", 00:09:30.379 "is_configured": true, 00:09:30.379 "data_offset": 2048, 00:09:30.380 "data_size": 63488 00:09:30.380 }, 00:09:30.380 { 00:09:30.380 "name": "BaseBdev2", 00:09:30.380 "uuid": "135ad59c-4f90-4450-885f-1d8f01f8a2b5", 00:09:30.380 "is_configured": true, 00:09:30.380 "data_offset": 2048, 00:09:30.380 "data_size": 63488 00:09:30.380 }, 00:09:30.380 { 00:09:30.380 "name": "BaseBdev3", 00:09:30.380 "uuid": "ce81cf31-b4b8-44e6-bda2-75b0d5b58975", 00:09:30.380 "is_configured": true, 00:09:30.380 "data_offset": 2048, 00:09:30.380 "data_size": 63488 00:09:30.380 } 00:09:30.380 ] 00:09:30.380 } 00:09:30.380 } 00:09:30.380 }' 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:30.380 BaseBdev2 00:09:30.380 BaseBdev3' 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:30.380 16:35:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.380 16:35:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.380 [2024-12-07 16:35:29.258222] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.380 [2024-12-07 16:35:29.258265] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.380 [2024-12-07 16:35:29.258388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.380 [2024-12-07 16:35:29.258677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.380 [2024-12-07 16:35:29.258694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79341 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 79341 ']' 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 79341 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.380 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79341 00:09:30.640 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.640 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.640 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79341' 00:09:30.640 killing process with pid 79341 00:09:30.640 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 79341 00:09:30.640 [2024-12-07 16:35:29.308905] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.640 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 79341 00:09:30.640 [2024-12-07 16:35:29.371025] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.901 16:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:30.901 00:09:30.901 real 0m9.160s 00:09:30.901 user 0m15.245s 00:09:30.901 sys 0m2.076s 00:09:30.901 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.901 16:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.901 ************************************ 00:09:30.901 END TEST raid_state_function_test_sb 00:09:30.901 ************************************ 00:09:31.161 16:35:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:31.161 16:35:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:31.161 16:35:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.161 16:35:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.161 ************************************ 00:09:31.161 START TEST raid_superblock_test 00:09:31.161 ************************************ 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79950 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79950 00:09:31.161 16:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79950 ']' 00:09:31.162 16:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.162 16:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.162 16:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.162 16:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.162 16:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.162 [2024-12-07 16:35:29.925802] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:31.162 [2024-12-07 16:35:29.926037] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79950 ] 00:09:31.422 [2024-12-07 16:35:30.092023] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.422 [2024-12-07 16:35:30.174777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.422 [2024-12-07 16:35:30.254031] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.422 [2024-12-07 16:35:30.254167] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:31.992 
16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.992 malloc1 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.992 [2024-12-07 16:35:30.786748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:31.992 [2024-12-07 16:35:30.786874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.992 [2024-12-07 16:35:30.786941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:31.992 [2024-12-07 16:35:30.786987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.992 [2024-12-07 16:35:30.789451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.992 [2024-12-07 16:35:30.789521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:31.992 pt1 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.992 malloc2 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.992 [2024-12-07 16:35:30.834985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:31.992 [2024-12-07 16:35:30.835044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.992 [2024-12-07 16:35:30.835063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:31.992 [2024-12-07 16:35:30.835075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.992 [2024-12-07 16:35:30.837700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.992 [2024-12-07 16:35:30.837790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:31.992 
pt2 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.992 malloc3 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.992 [2024-12-07 16:35:30.869955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:31.992 [2024-12-07 16:35:30.870041] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.992 [2024-12-07 16:35:30.870077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:31.992 [2024-12-07 16:35:30.870108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.992 [2024-12-07 16:35:30.872642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.992 [2024-12-07 16:35:30.872708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:31.992 pt3 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.992 [2024-12-07 16:35:30.881993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:31.992 [2024-12-07 16:35:30.884254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:31.992 [2024-12-07 16:35:30.884370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:31.992 [2024-12-07 16:35:30.884544] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:31.992 [2024-12-07 16:35:30.884595] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:31.992 [2024-12-07 16:35:30.884878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:31.992 
[2024-12-07 16:35:30.885070] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:31.992 [2024-12-07 16:35:30.885119] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:31.992 [2024-12-07 16:35:30.885283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.992 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:31.993 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.993 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.253 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.253 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.253 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.253 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.253 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.253 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.253 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.253 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.253 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.253 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.253 16:35:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.253 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.253 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.253 "name": "raid_bdev1", 00:09:32.253 "uuid": "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef", 00:09:32.253 "strip_size_kb": 0, 00:09:32.253 "state": "online", 00:09:32.253 "raid_level": "raid1", 00:09:32.253 "superblock": true, 00:09:32.253 "num_base_bdevs": 3, 00:09:32.253 "num_base_bdevs_discovered": 3, 00:09:32.253 "num_base_bdevs_operational": 3, 00:09:32.253 "base_bdevs_list": [ 00:09:32.254 { 00:09:32.254 "name": "pt1", 00:09:32.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:32.254 "is_configured": true, 00:09:32.254 "data_offset": 2048, 00:09:32.254 "data_size": 63488 00:09:32.254 }, 00:09:32.254 { 00:09:32.254 "name": "pt2", 00:09:32.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.254 "is_configured": true, 00:09:32.254 "data_offset": 2048, 00:09:32.254 "data_size": 63488 00:09:32.254 }, 00:09:32.254 { 00:09:32.254 "name": "pt3", 00:09:32.254 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:32.254 "is_configured": true, 00:09:32.254 "data_offset": 2048, 00:09:32.254 "data_size": 63488 00:09:32.254 } 00:09:32.254 ] 00:09:32.254 }' 00:09:32.254 16:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.254 16:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:32.529 16:35:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.529 [2024-12-07 16:35:31.297657] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:32.529 "name": "raid_bdev1", 00:09:32.529 "aliases": [ 00:09:32.529 "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef" 00:09:32.529 ], 00:09:32.529 "product_name": "Raid Volume", 00:09:32.529 "block_size": 512, 00:09:32.529 "num_blocks": 63488, 00:09:32.529 "uuid": "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef", 00:09:32.529 "assigned_rate_limits": { 00:09:32.529 "rw_ios_per_sec": 0, 00:09:32.529 "rw_mbytes_per_sec": 0, 00:09:32.529 "r_mbytes_per_sec": 0, 00:09:32.529 "w_mbytes_per_sec": 0 00:09:32.529 }, 00:09:32.529 "claimed": false, 00:09:32.529 "zoned": false, 00:09:32.529 "supported_io_types": { 00:09:32.529 "read": true, 00:09:32.529 "write": true, 00:09:32.529 "unmap": false, 00:09:32.529 "flush": false, 00:09:32.529 "reset": true, 00:09:32.529 "nvme_admin": false, 00:09:32.529 "nvme_io": false, 00:09:32.529 "nvme_io_md": false, 00:09:32.529 "write_zeroes": true, 00:09:32.529 "zcopy": false, 00:09:32.529 "get_zone_info": false, 00:09:32.529 "zone_management": false, 00:09:32.529 "zone_append": false, 00:09:32.529 "compare": false, 00:09:32.529 
"compare_and_write": false, 00:09:32.529 "abort": false, 00:09:32.529 "seek_hole": false, 00:09:32.529 "seek_data": false, 00:09:32.529 "copy": false, 00:09:32.529 "nvme_iov_md": false 00:09:32.529 }, 00:09:32.529 "memory_domains": [ 00:09:32.529 { 00:09:32.529 "dma_device_id": "system", 00:09:32.529 "dma_device_type": 1 00:09:32.529 }, 00:09:32.529 { 00:09:32.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.529 "dma_device_type": 2 00:09:32.529 }, 00:09:32.529 { 00:09:32.529 "dma_device_id": "system", 00:09:32.529 "dma_device_type": 1 00:09:32.529 }, 00:09:32.529 { 00:09:32.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.529 "dma_device_type": 2 00:09:32.529 }, 00:09:32.529 { 00:09:32.529 "dma_device_id": "system", 00:09:32.529 "dma_device_type": 1 00:09:32.529 }, 00:09:32.529 { 00:09:32.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.529 "dma_device_type": 2 00:09:32.529 } 00:09:32.529 ], 00:09:32.529 "driver_specific": { 00:09:32.529 "raid": { 00:09:32.529 "uuid": "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef", 00:09:32.529 "strip_size_kb": 0, 00:09:32.529 "state": "online", 00:09:32.529 "raid_level": "raid1", 00:09:32.529 "superblock": true, 00:09:32.529 "num_base_bdevs": 3, 00:09:32.529 "num_base_bdevs_discovered": 3, 00:09:32.529 "num_base_bdevs_operational": 3, 00:09:32.529 "base_bdevs_list": [ 00:09:32.529 { 00:09:32.529 "name": "pt1", 00:09:32.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:32.529 "is_configured": true, 00:09:32.529 "data_offset": 2048, 00:09:32.529 "data_size": 63488 00:09:32.529 }, 00:09:32.529 { 00:09:32.529 "name": "pt2", 00:09:32.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.529 "is_configured": true, 00:09:32.529 "data_offset": 2048, 00:09:32.529 "data_size": 63488 00:09:32.529 }, 00:09:32.529 { 00:09:32.529 "name": "pt3", 00:09:32.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:32.529 "is_configured": true, 00:09:32.529 "data_offset": 2048, 00:09:32.529 "data_size": 63488 00:09:32.529 } 
00:09:32.529 ] 00:09:32.529 } 00:09:32.529 } 00:09:32.529 }' 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:32.529 pt2 00:09:32.529 pt3' 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.529 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 [2024-12-07 16:35:31.553161] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef ']' 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 [2024-12-07 16:35:31.600767] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:32.792 [2024-12-07 16:35:31.600859] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.792 [2024-12-07 16:35:31.601004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.792 [2024-12-07 16:35:31.601123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.792 [2024-12-07 16:35:31.601170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.792 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.067 [2024-12-07 16:35:31.748499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:33.067 [2024-12-07 16:35:31.750783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:33.067 [2024-12-07 16:35:31.750832] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:33.067 [2024-12-07 16:35:31.750885] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:33.067 [2024-12-07 16:35:31.750954] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:33.067 [2024-12-07 16:35:31.750974] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:33.067 [2024-12-07 16:35:31.750988] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:33.067 [2024-12-07 16:35:31.751008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:33.067 request: 00:09:33.067 { 00:09:33.067 "name": "raid_bdev1", 00:09:33.067 "raid_level": "raid1", 00:09:33.067 "base_bdevs": [ 00:09:33.067 "malloc1", 00:09:33.067 "malloc2", 00:09:33.067 "malloc3" 00:09:33.067 ], 00:09:33.067 "superblock": false, 00:09:33.067 "method": "bdev_raid_create", 00:09:33.067 "req_id": 1 00:09:33.067 } 00:09:33.067 Got JSON-RPC error response 00:09:33.067 response: 00:09:33.067 { 00:09:33.067 "code": -17, 00:09:33.067 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:33.067 } 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.067 [2024-12-07 16:35:31.812330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:33.067 [2024-12-07 16:35:31.812443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.067 [2024-12-07 16:35:31.812481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:33.067 [2024-12-07 16:35:31.812511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.067 [2024-12-07 16:35:31.815007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.067 [2024-12-07 16:35:31.815081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:33.067 [2024-12-07 16:35:31.815186] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:33.067 [2024-12-07 16:35:31.815268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:33.067 pt1 00:09:33.067 
16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.067 "name": "raid_bdev1", 00:09:33.067 "uuid": "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef", 00:09:33.067 "strip_size_kb": 0, 00:09:33.067 
"state": "configuring", 00:09:33.067 "raid_level": "raid1", 00:09:33.067 "superblock": true, 00:09:33.067 "num_base_bdevs": 3, 00:09:33.067 "num_base_bdevs_discovered": 1, 00:09:33.067 "num_base_bdevs_operational": 3, 00:09:33.067 "base_bdevs_list": [ 00:09:33.067 { 00:09:33.067 "name": "pt1", 00:09:33.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.067 "is_configured": true, 00:09:33.067 "data_offset": 2048, 00:09:33.067 "data_size": 63488 00:09:33.067 }, 00:09:33.067 { 00:09:33.067 "name": null, 00:09:33.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.067 "is_configured": false, 00:09:33.067 "data_offset": 2048, 00:09:33.067 "data_size": 63488 00:09:33.067 }, 00:09:33.067 { 00:09:33.067 "name": null, 00:09:33.067 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.067 "is_configured": false, 00:09:33.067 "data_offset": 2048, 00:09:33.067 "data_size": 63488 00:09:33.067 } 00:09:33.067 ] 00:09:33.067 }' 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.067 16:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.657 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:33.657 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:33.657 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.657 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.657 [2024-12-07 16:35:32.259649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:33.657 [2024-12-07 16:35:32.259803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.657 [2024-12-07 16:35:32.259847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:33.657 
[2024-12-07 16:35:32.259882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.657 [2024-12-07 16:35:32.260444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.657 [2024-12-07 16:35:32.260514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:33.657 [2024-12-07 16:35:32.260622] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:33.657 [2024-12-07 16:35:32.260655] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:33.657 pt2 00:09:33.657 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.657 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:33.657 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.657 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.657 [2024-12-07 16:35:32.271649] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.658 "name": "raid_bdev1", 00:09:33.658 "uuid": "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef", 00:09:33.658 "strip_size_kb": 0, 00:09:33.658 "state": "configuring", 00:09:33.658 "raid_level": "raid1", 00:09:33.658 "superblock": true, 00:09:33.658 "num_base_bdevs": 3, 00:09:33.658 "num_base_bdevs_discovered": 1, 00:09:33.658 "num_base_bdevs_operational": 3, 00:09:33.658 "base_bdevs_list": [ 00:09:33.658 { 00:09:33.658 "name": "pt1", 00:09:33.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.658 "is_configured": true, 00:09:33.658 "data_offset": 2048, 00:09:33.658 "data_size": 63488 00:09:33.658 }, 00:09:33.658 { 00:09:33.658 "name": null, 00:09:33.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.658 "is_configured": false, 00:09:33.658 "data_offset": 0, 00:09:33.658 "data_size": 63488 00:09:33.658 }, 00:09:33.658 { 00:09:33.658 "name": null, 00:09:33.658 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.658 "is_configured": false, 00:09:33.658 
"data_offset": 2048, 00:09:33.658 "data_size": 63488 00:09:33.658 } 00:09:33.658 ] 00:09:33.658 }' 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.658 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.919 [2024-12-07 16:35:32.726902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:33.919 [2024-12-07 16:35:32.727041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.919 [2024-12-07 16:35:32.727084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:33.919 [2024-12-07 16:35:32.727113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.919 [2024-12-07 16:35:32.727654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.919 [2024-12-07 16:35:32.727712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:33.919 [2024-12-07 16:35:32.727840] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:33.919 [2024-12-07 16:35:32.727903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:33.919 pt2 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.919 16:35:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.919 [2024-12-07 16:35:32.738803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:33.919 [2024-12-07 16:35:32.738880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.919 [2024-12-07 16:35:32.738936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:33.919 [2024-12-07 16:35:32.738963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.919 [2024-12-07 16:35:32.739363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.919 [2024-12-07 16:35:32.739420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:33.919 [2024-12-07 16:35:32.739515] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:33.919 [2024-12-07 16:35:32.739560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:33.919 [2024-12-07 16:35:32.739679] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:33.919 [2024-12-07 16:35:32.739714] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:33.919 [2024-12-07 16:35:32.740004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:33.919 [2024-12-07 16:35:32.740184] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:09:33.919 [2024-12-07 16:35:32.740227] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:33.919 [2024-12-07 16:35:32.740382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.919 pt3 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.919 16:35:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.919 "name": "raid_bdev1", 00:09:33.919 "uuid": "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef", 00:09:33.919 "strip_size_kb": 0, 00:09:33.919 "state": "online", 00:09:33.919 "raid_level": "raid1", 00:09:33.919 "superblock": true, 00:09:33.919 "num_base_bdevs": 3, 00:09:33.919 "num_base_bdevs_discovered": 3, 00:09:33.919 "num_base_bdevs_operational": 3, 00:09:33.919 "base_bdevs_list": [ 00:09:33.919 { 00:09:33.919 "name": "pt1", 00:09:33.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.919 "is_configured": true, 00:09:33.919 "data_offset": 2048, 00:09:33.919 "data_size": 63488 00:09:33.919 }, 00:09:33.919 { 00:09:33.919 "name": "pt2", 00:09:33.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.919 "is_configured": true, 00:09:33.919 "data_offset": 2048, 00:09:33.919 "data_size": 63488 00:09:33.919 }, 00:09:33.919 { 00:09:33.919 "name": "pt3", 00:09:33.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.919 "is_configured": true, 00:09:33.919 "data_offset": 2048, 00:09:33.919 "data_size": 63488 00:09:33.919 } 00:09:33.919 ] 00:09:33.919 }' 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.919 16:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.490 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:34.490 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:34.490 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:34.490 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.491 [2024-12-07 16:35:33.190458] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.491 "name": "raid_bdev1", 00:09:34.491 "aliases": [ 00:09:34.491 "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef" 00:09:34.491 ], 00:09:34.491 "product_name": "Raid Volume", 00:09:34.491 "block_size": 512, 00:09:34.491 "num_blocks": 63488, 00:09:34.491 "uuid": "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef", 00:09:34.491 "assigned_rate_limits": { 00:09:34.491 "rw_ios_per_sec": 0, 00:09:34.491 "rw_mbytes_per_sec": 0, 00:09:34.491 "r_mbytes_per_sec": 0, 00:09:34.491 "w_mbytes_per_sec": 0 00:09:34.491 }, 00:09:34.491 "claimed": false, 00:09:34.491 "zoned": false, 00:09:34.491 "supported_io_types": { 00:09:34.491 "read": true, 00:09:34.491 "write": true, 00:09:34.491 "unmap": false, 00:09:34.491 "flush": false, 00:09:34.491 "reset": true, 00:09:34.491 "nvme_admin": false, 00:09:34.491 "nvme_io": false, 00:09:34.491 "nvme_io_md": false, 00:09:34.491 "write_zeroes": true, 00:09:34.491 "zcopy": false, 00:09:34.491 "get_zone_info": 
false, 00:09:34.491 "zone_management": false, 00:09:34.491 "zone_append": false, 00:09:34.491 "compare": false, 00:09:34.491 "compare_and_write": false, 00:09:34.491 "abort": false, 00:09:34.491 "seek_hole": false, 00:09:34.491 "seek_data": false, 00:09:34.491 "copy": false, 00:09:34.491 "nvme_iov_md": false 00:09:34.491 }, 00:09:34.491 "memory_domains": [ 00:09:34.491 { 00:09:34.491 "dma_device_id": "system", 00:09:34.491 "dma_device_type": 1 00:09:34.491 }, 00:09:34.491 { 00:09:34.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.491 "dma_device_type": 2 00:09:34.491 }, 00:09:34.491 { 00:09:34.491 "dma_device_id": "system", 00:09:34.491 "dma_device_type": 1 00:09:34.491 }, 00:09:34.491 { 00:09:34.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.491 "dma_device_type": 2 00:09:34.491 }, 00:09:34.491 { 00:09:34.491 "dma_device_id": "system", 00:09:34.491 "dma_device_type": 1 00:09:34.491 }, 00:09:34.491 { 00:09:34.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.491 "dma_device_type": 2 00:09:34.491 } 00:09:34.491 ], 00:09:34.491 "driver_specific": { 00:09:34.491 "raid": { 00:09:34.491 "uuid": "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef", 00:09:34.491 "strip_size_kb": 0, 00:09:34.491 "state": "online", 00:09:34.491 "raid_level": "raid1", 00:09:34.491 "superblock": true, 00:09:34.491 "num_base_bdevs": 3, 00:09:34.491 "num_base_bdevs_discovered": 3, 00:09:34.491 "num_base_bdevs_operational": 3, 00:09:34.491 "base_bdevs_list": [ 00:09:34.491 { 00:09:34.491 "name": "pt1", 00:09:34.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.491 "is_configured": true, 00:09:34.491 "data_offset": 2048, 00:09:34.491 "data_size": 63488 00:09:34.491 }, 00:09:34.491 { 00:09:34.491 "name": "pt2", 00:09:34.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.491 "is_configured": true, 00:09:34.491 "data_offset": 2048, 00:09:34.491 "data_size": 63488 00:09:34.491 }, 00:09:34.491 { 00:09:34.491 "name": "pt3", 00:09:34.491 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:34.491 "is_configured": true, 00:09:34.491 "data_offset": 2048, 00:09:34.491 "data_size": 63488 00:09:34.491 } 00:09:34.491 ] 00:09:34.491 } 00:09:34.491 } 00:09:34.491 }' 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:34.491 pt2 00:09:34.491 pt3' 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.491 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.750 [2024-12-07 16:35:33.473928] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef '!=' cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef ']' 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.750 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.750 [2024-12-07 16:35:33.509629] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.751 16:35:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.751 "name": "raid_bdev1", 00:09:34.751 "uuid": "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef", 00:09:34.751 "strip_size_kb": 0, 00:09:34.751 "state": "online", 00:09:34.751 "raid_level": "raid1", 00:09:34.751 "superblock": true, 00:09:34.751 "num_base_bdevs": 3, 00:09:34.751 "num_base_bdevs_discovered": 2, 00:09:34.751 "num_base_bdevs_operational": 2, 00:09:34.751 "base_bdevs_list": [ 00:09:34.751 { 00:09:34.751 "name": null, 00:09:34.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.751 "is_configured": false, 00:09:34.751 "data_offset": 0, 00:09:34.751 "data_size": 63488 00:09:34.751 }, 00:09:34.751 { 00:09:34.751 "name": "pt2", 00:09:34.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.751 "is_configured": true, 00:09:34.751 "data_offset": 2048, 00:09:34.751 "data_size": 63488 00:09:34.751 }, 00:09:34.751 { 00:09:34.751 "name": "pt3", 00:09:34.751 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.751 "is_configured": true, 00:09:34.751 "data_offset": 2048, 00:09:34.751 "data_size": 63488 00:09:34.751 } 
00:09:34.751 ] 00:09:34.751 }' 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.751 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.320 [2024-12-07 16:35:33.920914] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.320 [2024-12-07 16:35:33.920953] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.320 [2024-12-07 16:35:33.921060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.320 [2024-12-07 16:35:33.921131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.320 [2024-12-07 16:35:33.921141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:35.320 16:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:35.321 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.321 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.321 16:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.321 16:35:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.321 [2024-12-07 16:35:34.008706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:35.321 [2024-12-07 16:35:34.008757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.321 [2024-12-07 16:35:34.008778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:35.321 [2024-12-07 16:35:34.008787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.321 [2024-12-07 16:35:34.011299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.321 [2024-12-07 16:35:34.011333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:35.321 [2024-12-07 16:35:34.011423] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:35.321 [2024-12-07 16:35:34.011463] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:35.321 pt2 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.321 16:35:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.321 "name": "raid_bdev1", 00:09:35.321 "uuid": "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef", 00:09:35.321 "strip_size_kb": 0, 00:09:35.321 "state": "configuring", 00:09:35.321 "raid_level": "raid1", 00:09:35.321 "superblock": true, 00:09:35.321 "num_base_bdevs": 3, 00:09:35.321 "num_base_bdevs_discovered": 1, 00:09:35.321 "num_base_bdevs_operational": 2, 00:09:35.321 "base_bdevs_list": [ 00:09:35.321 { 00:09:35.321 "name": null, 00:09:35.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.321 "is_configured": false, 00:09:35.321 "data_offset": 2048, 00:09:35.321 "data_size": 63488 00:09:35.321 }, 00:09:35.321 { 00:09:35.321 "name": "pt2", 00:09:35.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.321 "is_configured": true, 00:09:35.321 "data_offset": 2048, 00:09:35.321 "data_size": 63488 00:09:35.321 }, 00:09:35.321 { 00:09:35.321 "name": null, 00:09:35.321 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.321 "is_configured": false, 00:09:35.321 "data_offset": 2048, 00:09:35.321 "data_size": 63488 00:09:35.321 } 
00:09:35.321 ] 00:09:35.321 }' 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.321 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.580 [2024-12-07 16:35:34.432081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:35.580 [2024-12-07 16:35:34.432211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.580 [2024-12-07 16:35:34.432255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:35.580 [2024-12-07 16:35:34.432284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.580 [2024-12-07 16:35:34.432805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.580 [2024-12-07 16:35:34.432866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:35.580 [2024-12-07 16:35:34.432994] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:35.580 [2024-12-07 16:35:34.433048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:35.580 [2024-12-07 16:35:34.433181] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 
00:09:35.580 [2024-12-07 16:35:34.433216] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:35.580 [2024-12-07 16:35:34.433532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:35.580 [2024-12-07 16:35:34.433700] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:35.580 [2024-12-07 16:35:34.433740] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:35.580 [2024-12-07 16:35:34.433897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.580 pt3 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.580 
16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.580 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.839 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.839 "name": "raid_bdev1", 00:09:35.839 "uuid": "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef", 00:09:35.839 "strip_size_kb": 0, 00:09:35.839 "state": "online", 00:09:35.839 "raid_level": "raid1", 00:09:35.839 "superblock": true, 00:09:35.839 "num_base_bdevs": 3, 00:09:35.839 "num_base_bdevs_discovered": 2, 00:09:35.839 "num_base_bdevs_operational": 2, 00:09:35.839 "base_bdevs_list": [ 00:09:35.839 { 00:09:35.839 "name": null, 00:09:35.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.839 "is_configured": false, 00:09:35.839 "data_offset": 2048, 00:09:35.839 "data_size": 63488 00:09:35.839 }, 00:09:35.839 { 00:09:35.839 "name": "pt2", 00:09:35.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.839 "is_configured": true, 00:09:35.839 "data_offset": 2048, 00:09:35.839 "data_size": 63488 00:09:35.839 }, 00:09:35.839 { 00:09:35.839 "name": "pt3", 00:09:35.839 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.839 "is_configured": true, 00:09:35.839 "data_offset": 2048, 00:09:35.839 "data_size": 63488 00:09:35.839 } 00:09:35.839 ] 00:09:35.840 }' 00:09:35.840 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.840 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.098 [2024-12-07 16:35:34.879242] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.098 [2024-12-07 16:35:34.879309] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.098 [2024-12-07 16:35:34.879398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.098 [2024-12-07 16:35:34.879455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.098 [2024-12-07 16:35:34.879468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.098 [2024-12-07 16:35:34.951107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:36.098 [2024-12-07 16:35:34.951165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.098 [2024-12-07 16:35:34.951180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:36.098 [2024-12-07 16:35:34.951192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.098 [2024-12-07 16:35:34.953657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.098 [2024-12-07 16:35:34.953732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:36.098 [2024-12-07 16:35:34.953800] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:36.098 [2024-12-07 16:35:34.953843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:36.098 [2024-12-07 16:35:34.953947] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:36.098 [2024-12-07 16:35:34.953963] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.098 [2024-12-07 16:35:34.953977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007080 name raid_bdev1, state configuring 00:09:36.098 [2024-12-07 16:35:34.954019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:36.098 pt1 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.098 16:35:34 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.361 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.361 "name": "raid_bdev1", 00:09:36.361 "uuid": "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef", 00:09:36.361 "strip_size_kb": 0, 00:09:36.361 "state": "configuring", 00:09:36.361 "raid_level": "raid1", 00:09:36.361 "superblock": true, 00:09:36.361 "num_base_bdevs": 3, 00:09:36.361 "num_base_bdevs_discovered": 1, 00:09:36.361 "num_base_bdevs_operational": 2, 00:09:36.361 "base_bdevs_list": [ 00:09:36.361 { 00:09:36.361 "name": null, 00:09:36.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.361 "is_configured": false, 00:09:36.361 "data_offset": 2048, 00:09:36.361 "data_size": 63488 00:09:36.361 }, 00:09:36.361 { 00:09:36.361 "name": "pt2", 00:09:36.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.361 "is_configured": true, 00:09:36.361 "data_offset": 2048, 00:09:36.361 "data_size": 63488 00:09:36.361 }, 00:09:36.361 { 00:09:36.361 "name": null, 00:09:36.361 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:36.361 "is_configured": false, 00:09:36.361 "data_offset": 2048, 00:09:36.361 "data_size": 63488 00:09:36.361 } 00:09:36.361 ] 00:09:36.361 }' 00:09:36.361 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.361 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.628 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:36.628 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.628 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.628 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:36.628 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:36.628 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:36.628 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.629 [2024-12-07 16:35:35.462332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:36.629 [2024-12-07 16:35:35.462486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.629 [2024-12-07 16:35:35.462526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:36.629 [2024-12-07 16:35:35.462557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.629 [2024-12-07 16:35:35.463101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.629 [2024-12-07 16:35:35.463165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:36.629 [2024-12-07 16:35:35.463292] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:36.629 [2024-12-07 16:35:35.463382] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:36.629 [2024-12-07 16:35:35.463528] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:36.629 [2024-12-07 16:35:35.463567] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:36.629 [2024-12-07 16:35:35.463831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:36.629 [2024-12-07 16:35:35.464009] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:36.629 [2024-12-07 16:35:35.464046] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:36.629 [2024-12-07 16:35:35.464205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.629 pt3 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.629 "name": "raid_bdev1", 00:09:36.629 "uuid": "cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef", 00:09:36.629 "strip_size_kb": 0, 00:09:36.629 "state": "online", 00:09:36.629 "raid_level": "raid1", 00:09:36.629 "superblock": true, 00:09:36.629 "num_base_bdevs": 3, 00:09:36.629 "num_base_bdevs_discovered": 2, 00:09:36.629 "num_base_bdevs_operational": 2, 00:09:36.629 "base_bdevs_list": [ 00:09:36.629 { 00:09:36.629 "name": null, 00:09:36.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.629 "is_configured": false, 00:09:36.629 "data_offset": 2048, 00:09:36.629 "data_size": 63488 00:09:36.629 }, 00:09:36.629 { 00:09:36.629 "name": "pt2", 00:09:36.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.629 "is_configured": true, 00:09:36.629 "data_offset": 2048, 00:09:36.629 "data_size": 63488 00:09:36.629 }, 00:09:36.629 { 00:09:36.629 "name": "pt3", 00:09:36.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:36.629 "is_configured": true, 00:09:36.629 "data_offset": 2048, 00:09:36.629 "data_size": 63488 00:09:36.629 } 00:09:36.629 ] 00:09:36.629 }' 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.629 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:37.195 [2024-12-07 16:35:35.917810] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef '!=' cb7fe4e4-9e2d-42d5-81cd-5bed928c49ef ']' 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79950 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79950 ']' 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79950 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.195 16:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79950 00:09:37.195 16:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:37.195 16:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:37.195 16:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79950' 00:09:37.195 killing process with pid 79950 00:09:37.195 16:35:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 79950 00:09:37.195 [2024-12-07 16:35:36.007769] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:37.195 [2024-12-07 16:35:36.007914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.195 16:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79950 00:09:37.195 [2024-12-07 16:35:36.008013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.195 [2024-12-07 16:35:36.008023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:37.195 [2024-12-07 16:35:36.068714] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.762 ************************************ 00:09:37.762 END TEST raid_superblock_test 00:09:37.762 ************************************ 00:09:37.762 16:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:37.762 00:09:37.762 real 0m6.614s 00:09:37.762 user 0m10.810s 00:09:37.762 sys 0m1.381s 00:09:37.762 16:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.762 16:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.762 16:35:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:37.762 16:35:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:37.762 16:35:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.762 16:35:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.762 ************************************ 00:09:37.762 START TEST raid_read_error_test 00:09:37.762 ************************************ 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:09:37.762 16:35:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:37.762 16:35:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OvASLpTAaz 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80385 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80385 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 80385 ']' 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:37.762 16:35:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.762 [2024-12-07 16:35:36.621790] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:37.762 [2024-12-07 16:35:36.621996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80385 ] 00:09:38.020 [2024-12-07 16:35:36.787028] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.020 [2024-12-07 16:35:36.859565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.279 [2024-12-07 16:35:36.936996] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.279 [2024-12-07 16:35:36.937039] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.849 BaseBdev1_malloc 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.849 true 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.849 [2024-12-07 16:35:37.483281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:38.849 [2024-12-07 16:35:37.483338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.849 [2024-12-07 16:35:37.483371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:38.849 [2024-12-07 16:35:37.483380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.849 [2024-12-07 16:35:37.485795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.849 [2024-12-07 16:35:37.485867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:38.849 BaseBdev1 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.849 BaseBdev2_malloc 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.849 true 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.849 [2024-12-07 16:35:37.539102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:38.849 [2024-12-07 16:35:37.539150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.849 [2024-12-07 16:35:37.539169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:38.849 [2024-12-07 16:35:37.539177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.849 [2024-12-07 16:35:37.541529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.849 [2024-12-07 16:35:37.541599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:38.849 BaseBdev2 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.849 BaseBdev3_malloc 00:09:38.849 16:35:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.849 true 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.849 [2024-12-07 16:35:37.585660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:38.849 [2024-12-07 16:35:37.585702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.849 [2024-12-07 16:35:37.585721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:38.849 [2024-12-07 16:35:37.585729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.849 [2024-12-07 16:35:37.588097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.849 [2024-12-07 16:35:37.588130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:38.849 BaseBdev3 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.849 [2024-12-07 16:35:37.597708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.849 [2024-12-07 16:35:37.599821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.849 [2024-12-07 16:35:37.599953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.849 [2024-12-07 16:35:37.600141] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:38.849 [2024-12-07 16:35:37.600156] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:38.849 [2024-12-07 16:35:37.600417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:38.849 [2024-12-07 16:35:37.600571] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:38.849 [2024-12-07 16:35:37.600582] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:38.849 [2024-12-07 16:35:37.600739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.849 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.849 16:35:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.850 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.850 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.850 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.850 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.850 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.850 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.850 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.850 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.850 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.850 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.850 "name": "raid_bdev1", 00:09:38.850 "uuid": "a1670fb0-747f-4152-b8d5-60474f9942b7", 00:09:38.850 "strip_size_kb": 0, 00:09:38.850 "state": "online", 00:09:38.850 "raid_level": "raid1", 00:09:38.850 "superblock": true, 00:09:38.850 "num_base_bdevs": 3, 00:09:38.850 "num_base_bdevs_discovered": 3, 00:09:38.850 "num_base_bdevs_operational": 3, 00:09:38.850 "base_bdevs_list": [ 00:09:38.850 { 00:09:38.850 "name": "BaseBdev1", 00:09:38.850 "uuid": "461518fa-eadc-57e7-bd5f-00a38de970b8", 00:09:38.850 "is_configured": true, 00:09:38.850 "data_offset": 2048, 00:09:38.850 "data_size": 63488 00:09:38.850 }, 00:09:38.850 { 00:09:38.850 "name": "BaseBdev2", 00:09:38.850 "uuid": "49c46605-fb1a-54ea-bb2a-f853e2703663", 00:09:38.850 "is_configured": true, 00:09:38.850 "data_offset": 2048, 00:09:38.850 "data_size": 63488 
00:09:38.850 }, 00:09:38.850 { 00:09:38.850 "name": "BaseBdev3", 00:09:38.850 "uuid": "43d97000-2229-5184-9108-b90f95435da1", 00:09:38.850 "is_configured": true, 00:09:38.850 "data_offset": 2048, 00:09:38.850 "data_size": 63488 00:09:38.850 } 00:09:38.850 ] 00:09:38.850 }' 00:09:38.850 16:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.850 16:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.418 16:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:39.418 16:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:39.418 [2024-12-07 16:35:38.145183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.355 
16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.355 "name": "raid_bdev1", 00:09:40.355 "uuid": "a1670fb0-747f-4152-b8d5-60474f9942b7", 00:09:40.355 "strip_size_kb": 0, 00:09:40.355 "state": "online", 00:09:40.355 "raid_level": "raid1", 00:09:40.355 "superblock": true, 00:09:40.355 "num_base_bdevs": 3, 00:09:40.355 "num_base_bdevs_discovered": 3, 00:09:40.355 "num_base_bdevs_operational": 3, 00:09:40.355 "base_bdevs_list": [ 00:09:40.355 { 00:09:40.355 "name": "BaseBdev1", 00:09:40.355 "uuid": "461518fa-eadc-57e7-bd5f-00a38de970b8", 
00:09:40.355 "is_configured": true, 00:09:40.355 "data_offset": 2048, 00:09:40.355 "data_size": 63488 00:09:40.355 }, 00:09:40.355 { 00:09:40.355 "name": "BaseBdev2", 00:09:40.355 "uuid": "49c46605-fb1a-54ea-bb2a-f853e2703663", 00:09:40.355 "is_configured": true, 00:09:40.355 "data_offset": 2048, 00:09:40.355 "data_size": 63488 00:09:40.355 }, 00:09:40.355 { 00:09:40.355 "name": "BaseBdev3", 00:09:40.355 "uuid": "43d97000-2229-5184-9108-b90f95435da1", 00:09:40.355 "is_configured": true, 00:09:40.355 "data_offset": 2048, 00:09:40.355 "data_size": 63488 00:09:40.355 } 00:09:40.355 ] 00:09:40.355 }' 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.355 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.613 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.613 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.613 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.613 [2024-12-07 16:35:39.495833] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.613 [2024-12-07 16:35:39.495942] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.613 [2024-12-07 16:35:39.498598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.613 [2024-12-07 16:35:39.498697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.613 [2024-12-07 16:35:39.498827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.613 [2024-12-07 16:35:39.498867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:40.613 { 00:09:40.613 "results": [ 00:09:40.613 { 00:09:40.613 "job": "raid_bdev1", 
00:09:40.613 "core_mask": "0x1", 00:09:40.613 "workload": "randrw", 00:09:40.613 "percentage": 50, 00:09:40.613 "status": "finished", 00:09:40.613 "queue_depth": 1, 00:09:40.613 "io_size": 131072, 00:09:40.613 "runtime": 1.351264, 00:09:40.613 "iops": 10903.864825822342, 00:09:40.613 "mibps": 1362.9831032277928, 00:09:40.613 "io_failed": 0, 00:09:40.613 "io_timeout": 0, 00:09:40.613 "avg_latency_us": 89.1599249100349, 00:09:40.613 "min_latency_us": 23.36419213973799, 00:09:40.613 "max_latency_us": 1480.9991266375546 00:09:40.613 } 00:09:40.613 ], 00:09:40.613 "core_count": 1 00:09:40.613 } 00:09:40.613 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.613 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80385 00:09:40.613 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 80385 ']' 00:09:40.613 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 80385 00:09:40.613 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:40.613 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.871 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80385 00:09:40.871 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:40.871 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:40.871 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80385' 00:09:40.871 killing process with pid 80385 00:09:40.871 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 80385 00:09:40.871 [2024-12-07 16:35:39.544589] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.871 16:35:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 80385 00:09:40.871 [2024-12-07 16:35:39.595830] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.129 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OvASLpTAaz 00:09:41.129 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:41.129 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:41.129 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:41.129 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:41.129 ************************************ 00:09:41.129 END TEST raid_read_error_test 00:09:41.129 ************************************ 00:09:41.129 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.129 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:41.129 16:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:41.129 00:09:41.129 real 0m3.456s 00:09:41.129 user 0m4.224s 00:09:41.129 sys 0m0.634s 00:09:41.129 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.129 16:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.129 16:35:40 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:41.129 16:35:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:41.129 16:35:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.129 16:35:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.388 ************************************ 00:09:41.388 START TEST raid_write_error_test 00:09:41.388 ************************************ 00:09:41.388 16:35:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:41.388 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZdPdcx0LfX 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80514 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80514 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80514 ']' 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.389 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.389 [2024-12-07 16:35:40.147485] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:41.389 [2024-12-07 16:35:40.147678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80514 ] 00:09:41.648 [2024-12-07 16:35:40.312754] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.648 [2024-12-07 16:35:40.385204] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.648 [2024-12-07 16:35:40.462084] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.648 [2024-12-07 16:35:40.462196] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.215 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.215 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:42.215 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.215 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:42.215 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.215 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.215 BaseBdev1_malloc 00:09:42.215 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.215 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:42.215 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.215 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.215 true 00:09:42.215 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.215 16:35:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:42.215 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.215 16:35:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.215 [2024-12-07 16:35:41.004630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:42.215 [2024-12-07 16:35:41.004731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.215 [2024-12-07 16:35:41.004771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:42.215 [2024-12-07 16:35:41.004808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.215 [2024-12-07 16:35:41.007309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.215 [2024-12-07 16:35:41.007393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:42.215 BaseBdev1 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.215 BaseBdev2_malloc 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.215 true 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.215 [2024-12-07 16:35:41.064626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:42.215 [2024-12-07 16:35:41.064673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.215 [2024-12-07 16:35:41.064691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:42.215 [2024-12-07 16:35:41.064700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.215 [2024-12-07 16:35:41.067156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.215 [2024-12-07 16:35:41.067231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:42.215 BaseBdev2 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.215 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.216 16:35:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:42.216 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.216 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.216 BaseBdev3_malloc 00:09:42.216 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.216 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:42.216 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.216 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.216 true 00:09:42.216 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.216 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:42.216 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.216 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.216 [2024-12-07 16:35:41.111460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:42.216 [2024-12-07 16:35:41.111565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.216 [2024-12-07 16:35:41.111613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:42.216 [2024-12-07 16:35:41.111640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.475 [2024-12-07 16:35:41.113971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.475 [2024-12-07 16:35:41.114046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:42.475 BaseBdev3 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.475 [2024-12-07 16:35:41.123513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.475 [2024-12-07 16:35:41.125565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.475 [2024-12-07 16:35:41.125641] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.475 [2024-12-07 16:35:41.125811] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:42.475 [2024-12-07 16:35:41.125827] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:42.475 [2024-12-07 16:35:41.126066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:42.475 [2024-12-07 16:35:41.126221] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:42.475 [2024-12-07 16:35:41.126232] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:42.475 [2024-12-07 16:35:41.126389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.475 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.476 "name": "raid_bdev1", 00:09:42.476 "uuid": "83b7c8ec-d7ae-40e8-9058-f919a5622745", 00:09:42.476 "strip_size_kb": 0, 00:09:42.476 "state": "online", 00:09:42.476 "raid_level": "raid1", 00:09:42.476 "superblock": true, 00:09:42.476 "num_base_bdevs": 3, 00:09:42.476 "num_base_bdevs_discovered": 3, 00:09:42.476 "num_base_bdevs_operational": 3, 00:09:42.476 "base_bdevs_list": [ 00:09:42.476 { 00:09:42.476 "name": "BaseBdev1", 00:09:42.476 
"uuid": "d1d6bbd0-082e-5a76-9bab-e1602a7fb527", 00:09:42.476 "is_configured": true, 00:09:42.476 "data_offset": 2048, 00:09:42.476 "data_size": 63488 00:09:42.476 }, 00:09:42.476 { 00:09:42.476 "name": "BaseBdev2", 00:09:42.476 "uuid": "72d28d7c-f2d6-5fb0-98a9-8654bff6ee8d", 00:09:42.476 "is_configured": true, 00:09:42.476 "data_offset": 2048, 00:09:42.476 "data_size": 63488 00:09:42.476 }, 00:09:42.476 { 00:09:42.476 "name": "BaseBdev3", 00:09:42.476 "uuid": "cbaa7241-57bc-5bb3-b78a-81cf141b2359", 00:09:42.476 "is_configured": true, 00:09:42.476 "data_offset": 2048, 00:09:42.476 "data_size": 63488 00:09:42.476 } 00:09:42.476 ] 00:09:42.476 }' 00:09:42.476 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.476 16:35:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.734 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:42.734 16:35:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:42.993 [2024-12-07 16:35:41.683001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.929 [2024-12-07 16:35:42.618956] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:43.929 [2024-12-07 16:35:42.619095] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.929 [2024-12-07 16:35:42.619384] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 
00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.929 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.929 "name": "raid_bdev1", 00:09:43.929 "uuid": "83b7c8ec-d7ae-40e8-9058-f919a5622745", 00:09:43.929 "strip_size_kb": 0, 00:09:43.929 "state": "online", 00:09:43.929 "raid_level": "raid1", 00:09:43.929 "superblock": true, 00:09:43.929 "num_base_bdevs": 3, 00:09:43.929 "num_base_bdevs_discovered": 2, 00:09:43.929 "num_base_bdevs_operational": 2, 00:09:43.929 "base_bdevs_list": [ 00:09:43.929 { 00:09:43.929 "name": null, 00:09:43.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.930 "is_configured": false, 00:09:43.930 "data_offset": 0, 00:09:43.930 "data_size": 63488 00:09:43.930 }, 00:09:43.930 { 00:09:43.930 "name": "BaseBdev2", 00:09:43.930 "uuid": "72d28d7c-f2d6-5fb0-98a9-8654bff6ee8d", 00:09:43.930 "is_configured": true, 00:09:43.930 "data_offset": 2048, 00:09:43.930 "data_size": 63488 00:09:43.930 }, 00:09:43.930 { 00:09:43.930 "name": "BaseBdev3", 00:09:43.930 "uuid": "cbaa7241-57bc-5bb3-b78a-81cf141b2359", 00:09:43.930 "is_configured": true, 00:09:43.930 "data_offset": 2048, 00:09:43.930 "data_size": 63488 00:09:43.930 } 00:09:43.930 ] 00:09:43.930 }' 00:09:43.930 16:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.930 16:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.188 16:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:44.188 16:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.188 16:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.447 [2024-12-07 16:35:43.090423] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.447 [2024-12-07 16:35:43.090461] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.447 [2024-12-07 16:35:43.092995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.447 [2024-12-07 16:35:43.093066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.447 [2024-12-07 16:35:43.093159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.447 [2024-12-07 16:35:43.093170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:44.447 { 00:09:44.447 "results": [ 00:09:44.447 { 00:09:44.447 "job": "raid_bdev1", 00:09:44.447 "core_mask": "0x1", 00:09:44.447 "workload": "randrw", 00:09:44.447 "percentage": 50, 00:09:44.447 "status": "finished", 00:09:44.447 "queue_depth": 1, 00:09:44.447 "io_size": 131072, 00:09:44.447 "runtime": 1.408059, 00:09:44.447 "iops": 12849.603603258101, 00:09:44.447 "mibps": 1606.2004504072627, 00:09:44.447 "io_failed": 0, 00:09:44.447 "io_timeout": 0, 00:09:44.447 "avg_latency_us": 75.3046637013953, 00:09:44.447 "min_latency_us": 23.14061135371179, 00:09:44.447 "max_latency_us": 1488.1537117903931 00:09:44.447 } 00:09:44.447 ], 00:09:44.447 "core_count": 1 00:09:44.447 } 00:09:44.447 16:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.447 16:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80514 00:09:44.447 16:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80514 ']' 00:09:44.447 16:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80514 00:09:44.447 16:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:44.447 16:35:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.447 16:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80514 00:09:44.447 16:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:44.447 16:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:44.447 killing process with pid 80514 00:09:44.447 16:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80514' 00:09:44.447 16:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80514 00:09:44.447 [2024-12-07 16:35:43.126807] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.447 16:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80514 00:09:44.447 [2024-12-07 16:35:43.174505] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.723 16:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:44.723 16:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZdPdcx0LfX 00:09:44.723 16:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:44.723 16:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:44.723 16:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:44.723 16:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:44.723 16:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:44.723 16:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:44.723 00:09:44.723 real 0m3.517s 00:09:44.723 user 0m4.294s 00:09:44.723 sys 0m0.653s 00:09:44.723 16:35:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.723 ************************************ 00:09:44.723 END TEST raid_write_error_test 00:09:44.723 ************************************ 00:09:44.723 16:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.035 16:35:43 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:45.035 16:35:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:45.035 16:35:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:45.035 16:35:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:45.035 16:35:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.035 16:35:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.035 ************************************ 00:09:45.035 START TEST raid_state_function_test 00:09:45.035 ************************************ 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.035 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:45.036 
16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:45.036 Process raid pid: 80652 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80652 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80652' 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80652 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80652 ']' 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.036 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.036 [2024-12-07 16:35:43.721695] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:45.036 [2024-12-07 16:35:43.721885] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.036 [2024-12-07 16:35:43.887227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.301 [2024-12-07 16:35:43.958222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.301 [2024-12-07 16:35:44.034687] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.301 [2024-12-07 16:35:44.034828] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.877 [2024-12-07 16:35:44.558120] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.877 [2024-12-07 16:35:44.558247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.877 [2024-12-07 16:35:44.558294] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.877 [2024-12-07 16:35:44.558322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.877 [2024-12-07 16:35:44.558350] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:45.877 [2024-12-07 16:35:44.558378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.877 [2024-12-07 16:35:44.558397] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:45.877 [2024-12-07 16:35:44.558419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.877 "name": "Existed_Raid", 00:09:45.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.877 "strip_size_kb": 64, 00:09:45.877 "state": "configuring", 00:09:45.877 "raid_level": "raid0", 00:09:45.877 "superblock": false, 00:09:45.877 "num_base_bdevs": 4, 00:09:45.877 "num_base_bdevs_discovered": 0, 00:09:45.877 "num_base_bdevs_operational": 4, 00:09:45.877 "base_bdevs_list": [ 00:09:45.877 { 00:09:45.877 "name": "BaseBdev1", 00:09:45.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.877 "is_configured": false, 00:09:45.877 "data_offset": 0, 00:09:45.877 "data_size": 0 00:09:45.877 }, 00:09:45.877 { 00:09:45.877 "name": "BaseBdev2", 00:09:45.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.877 "is_configured": false, 00:09:45.877 "data_offset": 0, 00:09:45.877 "data_size": 0 00:09:45.877 }, 00:09:45.877 { 00:09:45.877 "name": "BaseBdev3", 00:09:45.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.877 "is_configured": false, 00:09:45.877 "data_offset": 0, 00:09:45.877 "data_size": 0 00:09:45.877 }, 00:09:45.877 { 00:09:45.877 "name": "BaseBdev4", 00:09:45.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.877 "is_configured": false, 00:09:45.877 "data_offset": 0, 00:09:45.877 "data_size": 0 00:09:45.877 } 00:09:45.877 ] 00:09:45.877 }' 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.877 16:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.136 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:46.136 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.136 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.136 [2024-12-07 16:35:45.029248] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:46.136 [2024-12-07 16:35:45.029305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 [2024-12-07 16:35:45.041236] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:46.401 [2024-12-07 16:35:45.041282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:46.401 [2024-12-07 16:35:45.041291] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.401 [2024-12-07 16:35:45.041300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.401 [2024-12-07 16:35:45.041306] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:46.401 [2024-12-07 16:35:45.041316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.401 [2024-12-07 16:35:45.041322] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:46.401 [2024-12-07 16:35:45.041330] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 [2024-12-07 16:35:45.068299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.401 BaseBdev1 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 [ 00:09:46.401 { 00:09:46.401 "name": "BaseBdev1", 00:09:46.401 "aliases": [ 00:09:46.401 "f72c0dc9-a0b3-4020-8138-7a3f14161662" 00:09:46.401 ], 00:09:46.401 "product_name": "Malloc disk", 00:09:46.401 "block_size": 512, 00:09:46.401 "num_blocks": 65536, 00:09:46.401 "uuid": "f72c0dc9-a0b3-4020-8138-7a3f14161662", 00:09:46.401 "assigned_rate_limits": { 00:09:46.401 "rw_ios_per_sec": 0, 00:09:46.401 "rw_mbytes_per_sec": 0, 00:09:46.401 "r_mbytes_per_sec": 0, 00:09:46.401 "w_mbytes_per_sec": 0 00:09:46.401 }, 00:09:46.401 "claimed": true, 00:09:46.401 "claim_type": "exclusive_write", 00:09:46.401 "zoned": false, 00:09:46.401 "supported_io_types": { 00:09:46.401 "read": true, 00:09:46.401 "write": true, 00:09:46.401 "unmap": true, 00:09:46.401 "flush": true, 00:09:46.401 "reset": true, 00:09:46.401 "nvme_admin": false, 00:09:46.401 "nvme_io": false, 00:09:46.401 "nvme_io_md": false, 00:09:46.401 "write_zeroes": true, 00:09:46.401 "zcopy": true, 00:09:46.401 "get_zone_info": false, 00:09:46.401 "zone_management": false, 00:09:46.401 "zone_append": false, 00:09:46.401 "compare": false, 00:09:46.401 "compare_and_write": false, 00:09:46.401 "abort": true, 00:09:46.401 "seek_hole": false, 00:09:46.401 "seek_data": false, 00:09:46.401 "copy": true, 00:09:46.401 "nvme_iov_md": false 00:09:46.401 }, 00:09:46.401 "memory_domains": [ 00:09:46.401 { 00:09:46.401 "dma_device_id": "system", 00:09:46.401 "dma_device_type": 1 00:09:46.401 }, 00:09:46.401 { 00:09:46.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.401 "dma_device_type": 2 00:09:46.401 } 00:09:46.401 ], 00:09:46.401 "driver_specific": {} 00:09:46.401 } 00:09:46.401 ] 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.401 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.401 "name": "Existed_Raid", 
00:09:46.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.401 "strip_size_kb": 64, 00:09:46.401 "state": "configuring", 00:09:46.401 "raid_level": "raid0", 00:09:46.401 "superblock": false, 00:09:46.401 "num_base_bdevs": 4, 00:09:46.401 "num_base_bdevs_discovered": 1, 00:09:46.401 "num_base_bdevs_operational": 4, 00:09:46.401 "base_bdevs_list": [ 00:09:46.401 { 00:09:46.401 "name": "BaseBdev1", 00:09:46.401 "uuid": "f72c0dc9-a0b3-4020-8138-7a3f14161662", 00:09:46.401 "is_configured": true, 00:09:46.401 "data_offset": 0, 00:09:46.401 "data_size": 65536 00:09:46.401 }, 00:09:46.401 { 00:09:46.402 "name": "BaseBdev2", 00:09:46.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.402 "is_configured": false, 00:09:46.402 "data_offset": 0, 00:09:46.402 "data_size": 0 00:09:46.402 }, 00:09:46.402 { 00:09:46.402 "name": "BaseBdev3", 00:09:46.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.402 "is_configured": false, 00:09:46.402 "data_offset": 0, 00:09:46.402 "data_size": 0 00:09:46.402 }, 00:09:46.402 { 00:09:46.402 "name": "BaseBdev4", 00:09:46.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.402 "is_configured": false, 00:09:46.402 "data_offset": 0, 00:09:46.402 "data_size": 0 00:09:46.402 } 00:09:46.402 ] 00:09:46.402 }' 00:09:46.402 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.402 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.659 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:46.659 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.659 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.659 [2024-12-07 16:35:45.535565] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:46.659 [2024-12-07 16:35:45.535692] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:46.659 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.660 [2024-12-07 16:35:45.547565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.660 [2024-12-07 16:35:45.549815] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.660 [2024-12-07 16:35:45.549893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.660 [2024-12-07 16:35:45.549922] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:46.660 [2024-12-07 16:35:45.549944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.660 [2024-12-07 16:35:45.549961] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:46.660 [2024-12-07 16:35:45.549980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.660 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.918 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.918 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.918 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.918 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.918 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.918 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.918 "name": "Existed_Raid", 00:09:46.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.918 "strip_size_kb": 64, 00:09:46.918 "state": "configuring", 00:09:46.918 "raid_level": "raid0", 00:09:46.918 "superblock": false, 00:09:46.918 "num_base_bdevs": 4, 00:09:46.918 
"num_base_bdevs_discovered": 1, 00:09:46.918 "num_base_bdevs_operational": 4, 00:09:46.918 "base_bdevs_list": [ 00:09:46.918 { 00:09:46.918 "name": "BaseBdev1", 00:09:46.918 "uuid": "f72c0dc9-a0b3-4020-8138-7a3f14161662", 00:09:46.918 "is_configured": true, 00:09:46.918 "data_offset": 0, 00:09:46.918 "data_size": 65536 00:09:46.918 }, 00:09:46.918 { 00:09:46.918 "name": "BaseBdev2", 00:09:46.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.918 "is_configured": false, 00:09:46.918 "data_offset": 0, 00:09:46.918 "data_size": 0 00:09:46.918 }, 00:09:46.918 { 00:09:46.918 "name": "BaseBdev3", 00:09:46.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.918 "is_configured": false, 00:09:46.918 "data_offset": 0, 00:09:46.918 "data_size": 0 00:09:46.918 }, 00:09:46.918 { 00:09:46.918 "name": "BaseBdev4", 00:09:46.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.918 "is_configured": false, 00:09:46.918 "data_offset": 0, 00:09:46.918 "data_size": 0 00:09:46.918 } 00:09:46.918 ] 00:09:46.918 }' 00:09:46.918 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.918 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.177 [2024-12-07 16:35:46.038777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.177 BaseBdev2 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:47.177 16:35:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.177 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.177 [ 00:09:47.177 { 00:09:47.177 "name": "BaseBdev2", 00:09:47.177 "aliases": [ 00:09:47.177 "9c317409-13b3-47af-9075-82770a190adb" 00:09:47.177 ], 00:09:47.177 "product_name": "Malloc disk", 00:09:47.177 "block_size": 512, 00:09:47.177 "num_blocks": 65536, 00:09:47.177 "uuid": "9c317409-13b3-47af-9075-82770a190adb", 00:09:47.177 "assigned_rate_limits": { 00:09:47.177 "rw_ios_per_sec": 0, 00:09:47.177 "rw_mbytes_per_sec": 0, 00:09:47.177 "r_mbytes_per_sec": 0, 00:09:47.177 "w_mbytes_per_sec": 0 00:09:47.177 }, 00:09:47.177 "claimed": true, 00:09:47.177 "claim_type": "exclusive_write", 00:09:47.177 "zoned": false, 00:09:47.177 "supported_io_types": { 
00:09:47.177 "read": true, 00:09:47.177 "write": true, 00:09:47.177 "unmap": true, 00:09:47.177 "flush": true, 00:09:47.177 "reset": true, 00:09:47.177 "nvme_admin": false, 00:09:47.177 "nvme_io": false, 00:09:47.177 "nvme_io_md": false, 00:09:47.177 "write_zeroes": true, 00:09:47.177 "zcopy": true, 00:09:47.177 "get_zone_info": false, 00:09:47.177 "zone_management": false, 00:09:47.177 "zone_append": false, 00:09:47.177 "compare": false, 00:09:47.177 "compare_and_write": false, 00:09:47.177 "abort": true, 00:09:47.177 "seek_hole": false, 00:09:47.177 "seek_data": false, 00:09:47.177 "copy": true, 00:09:47.177 "nvme_iov_md": false 00:09:47.177 }, 00:09:47.177 "memory_domains": [ 00:09:47.177 { 00:09:47.177 "dma_device_id": "system", 00:09:47.177 "dma_device_type": 1 00:09:47.177 }, 00:09:47.177 { 00:09:47.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.435 "dma_device_type": 2 00:09:47.435 } 00:09:47.435 ], 00:09:47.435 "driver_specific": {} 00:09:47.435 } 00:09:47.435 ] 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.435 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.435 "name": "Existed_Raid", 00:09:47.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.435 "strip_size_kb": 64, 00:09:47.436 "state": "configuring", 00:09:47.436 "raid_level": "raid0", 00:09:47.436 "superblock": false, 00:09:47.436 "num_base_bdevs": 4, 00:09:47.436 "num_base_bdevs_discovered": 2, 00:09:47.436 "num_base_bdevs_operational": 4, 00:09:47.436 "base_bdevs_list": [ 00:09:47.436 { 00:09:47.436 "name": "BaseBdev1", 00:09:47.436 "uuid": "f72c0dc9-a0b3-4020-8138-7a3f14161662", 00:09:47.436 "is_configured": true, 00:09:47.436 "data_offset": 0, 00:09:47.436 "data_size": 65536 00:09:47.436 }, 00:09:47.436 { 00:09:47.436 "name": "BaseBdev2", 00:09:47.436 "uuid": "9c317409-13b3-47af-9075-82770a190adb", 00:09:47.436 
"is_configured": true, 00:09:47.436 "data_offset": 0, 00:09:47.436 "data_size": 65536 00:09:47.436 }, 00:09:47.436 { 00:09:47.436 "name": "BaseBdev3", 00:09:47.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.436 "is_configured": false, 00:09:47.436 "data_offset": 0, 00:09:47.436 "data_size": 0 00:09:47.436 }, 00:09:47.436 { 00:09:47.436 "name": "BaseBdev4", 00:09:47.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.436 "is_configured": false, 00:09:47.436 "data_offset": 0, 00:09:47.436 "data_size": 0 00:09:47.436 } 00:09:47.436 ] 00:09:47.436 }' 00:09:47.436 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.436 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.695 [2024-12-07 16:35:46.555102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.695 BaseBdev3 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.695 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.695 [ 00:09:47.695 { 00:09:47.695 "name": "BaseBdev3", 00:09:47.695 "aliases": [ 00:09:47.695 "b4d757b0-617f-4dde-b05d-38fc39246def" 00:09:47.695 ], 00:09:47.695 "product_name": "Malloc disk", 00:09:47.695 "block_size": 512, 00:09:47.695 "num_blocks": 65536, 00:09:47.695 "uuid": "b4d757b0-617f-4dde-b05d-38fc39246def", 00:09:47.695 "assigned_rate_limits": { 00:09:47.695 "rw_ios_per_sec": 0, 00:09:47.695 "rw_mbytes_per_sec": 0, 00:09:47.695 "r_mbytes_per_sec": 0, 00:09:47.695 "w_mbytes_per_sec": 0 00:09:47.695 }, 00:09:47.695 "claimed": true, 00:09:47.695 "claim_type": "exclusive_write", 00:09:47.695 "zoned": false, 00:09:47.695 "supported_io_types": { 00:09:47.695 "read": true, 00:09:47.695 "write": true, 00:09:47.695 "unmap": true, 00:09:47.695 "flush": true, 00:09:47.695 "reset": true, 00:09:47.695 "nvme_admin": false, 00:09:47.695 "nvme_io": false, 00:09:47.695 "nvme_io_md": false, 00:09:47.695 "write_zeroes": true, 00:09:47.695 "zcopy": true, 00:09:47.695 "get_zone_info": false, 00:09:47.695 "zone_management": false, 00:09:47.695 "zone_append": false, 00:09:47.695 "compare": false, 00:09:47.695 "compare_and_write": false, 
00:09:47.695 "abort": true, 00:09:47.695 "seek_hole": false, 00:09:47.695 "seek_data": false, 00:09:47.695 "copy": true, 00:09:47.695 "nvme_iov_md": false 00:09:47.695 }, 00:09:47.695 "memory_domains": [ 00:09:47.695 { 00:09:47.695 "dma_device_id": "system", 00:09:47.695 "dma_device_type": 1 00:09:47.695 }, 00:09:47.695 { 00:09:47.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.954 "dma_device_type": 2 00:09:47.954 } 00:09:47.954 ], 00:09:47.954 "driver_specific": {} 00:09:47.954 } 00:09:47.954 ] 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.954 "name": "Existed_Raid", 00:09:47.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.954 "strip_size_kb": 64, 00:09:47.954 "state": "configuring", 00:09:47.954 "raid_level": "raid0", 00:09:47.954 "superblock": false, 00:09:47.954 "num_base_bdevs": 4, 00:09:47.954 "num_base_bdevs_discovered": 3, 00:09:47.954 "num_base_bdevs_operational": 4, 00:09:47.954 "base_bdevs_list": [ 00:09:47.954 { 00:09:47.954 "name": "BaseBdev1", 00:09:47.954 "uuid": "f72c0dc9-a0b3-4020-8138-7a3f14161662", 00:09:47.954 "is_configured": true, 00:09:47.954 "data_offset": 0, 00:09:47.954 "data_size": 65536 00:09:47.954 }, 00:09:47.954 { 00:09:47.954 "name": "BaseBdev2", 00:09:47.954 "uuid": "9c317409-13b3-47af-9075-82770a190adb", 00:09:47.954 "is_configured": true, 00:09:47.954 "data_offset": 0, 00:09:47.954 "data_size": 65536 00:09:47.954 }, 00:09:47.954 { 00:09:47.954 "name": "BaseBdev3", 00:09:47.954 "uuid": "b4d757b0-617f-4dde-b05d-38fc39246def", 00:09:47.954 "is_configured": true, 00:09:47.954 "data_offset": 0, 00:09:47.954 "data_size": 65536 00:09:47.954 }, 00:09:47.954 { 00:09:47.954 "name": "BaseBdev4", 00:09:47.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.954 "is_configured": false, 
00:09:47.954 "data_offset": 0, 00:09:47.954 "data_size": 0 00:09:47.954 } 00:09:47.954 ] 00:09:47.954 }' 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.954 16:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.213 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:48.213 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.213 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.213 [2024-12-07 16:35:47.063912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:48.213 [2024-12-07 16:35:47.064056] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:48.214 [2024-12-07 16:35:47.064086] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:48.214 [2024-12-07 16:35:47.064445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:48.214 [2024-12-07 16:35:47.064651] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:48.214 [2024-12-07 16:35:47.064695] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:48.214 [2024-12-07 16:35:47.064983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.214 BaseBdev4 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.214 [ 00:09:48.214 { 00:09:48.214 "name": "BaseBdev4", 00:09:48.214 "aliases": [ 00:09:48.214 "4aa493f4-ee8d-4e28-b891-efde814216c9" 00:09:48.214 ], 00:09:48.214 "product_name": "Malloc disk", 00:09:48.214 "block_size": 512, 00:09:48.214 "num_blocks": 65536, 00:09:48.214 "uuid": "4aa493f4-ee8d-4e28-b891-efde814216c9", 00:09:48.214 "assigned_rate_limits": { 00:09:48.214 "rw_ios_per_sec": 0, 00:09:48.214 "rw_mbytes_per_sec": 0, 00:09:48.214 "r_mbytes_per_sec": 0, 00:09:48.214 "w_mbytes_per_sec": 0 00:09:48.214 }, 00:09:48.214 "claimed": true, 00:09:48.214 "claim_type": "exclusive_write", 00:09:48.214 "zoned": false, 00:09:48.214 "supported_io_types": { 00:09:48.214 "read": true, 00:09:48.214 "write": true, 00:09:48.214 "unmap": true, 00:09:48.214 "flush": true, 00:09:48.214 "reset": true, 00:09:48.214 
"nvme_admin": false, 00:09:48.214 "nvme_io": false, 00:09:48.214 "nvme_io_md": false, 00:09:48.214 "write_zeroes": true, 00:09:48.214 "zcopy": true, 00:09:48.214 "get_zone_info": false, 00:09:48.214 "zone_management": false, 00:09:48.214 "zone_append": false, 00:09:48.214 "compare": false, 00:09:48.214 "compare_and_write": false, 00:09:48.214 "abort": true, 00:09:48.214 "seek_hole": false, 00:09:48.214 "seek_data": false, 00:09:48.214 "copy": true, 00:09:48.214 "nvme_iov_md": false 00:09:48.214 }, 00:09:48.214 "memory_domains": [ 00:09:48.214 { 00:09:48.214 "dma_device_id": "system", 00:09:48.214 "dma_device_type": 1 00:09:48.214 }, 00:09:48.214 { 00:09:48.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.214 "dma_device_type": 2 00:09:48.214 } 00:09:48.214 ], 00:09:48.214 "driver_specific": {} 00:09:48.214 } 00:09:48.214 ] 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.214 16:35:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.214 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.473 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.473 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.473 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.473 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.473 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.473 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.473 "name": "Existed_Raid", 00:09:48.473 "uuid": "433210b1-d590-4bbb-b606-1fbc17a116f6", 00:09:48.473 "strip_size_kb": 64, 00:09:48.473 "state": "online", 00:09:48.473 "raid_level": "raid0", 00:09:48.473 "superblock": false, 00:09:48.473 "num_base_bdevs": 4, 00:09:48.473 "num_base_bdevs_discovered": 4, 00:09:48.473 "num_base_bdevs_operational": 4, 00:09:48.473 "base_bdevs_list": [ 00:09:48.473 { 00:09:48.473 "name": "BaseBdev1", 00:09:48.473 "uuid": "f72c0dc9-a0b3-4020-8138-7a3f14161662", 00:09:48.473 "is_configured": true, 00:09:48.473 "data_offset": 0, 00:09:48.473 "data_size": 65536 00:09:48.473 }, 00:09:48.473 { 00:09:48.473 "name": "BaseBdev2", 00:09:48.473 "uuid": "9c317409-13b3-47af-9075-82770a190adb", 00:09:48.473 "is_configured": true, 00:09:48.473 "data_offset": 0, 00:09:48.473 "data_size": 65536 00:09:48.473 }, 00:09:48.473 { 00:09:48.473 "name": "BaseBdev3", 00:09:48.473 "uuid": 
"b4d757b0-617f-4dde-b05d-38fc39246def", 00:09:48.473 "is_configured": true, 00:09:48.473 "data_offset": 0, 00:09:48.473 "data_size": 65536 00:09:48.473 }, 00:09:48.473 { 00:09:48.473 "name": "BaseBdev4", 00:09:48.473 "uuid": "4aa493f4-ee8d-4e28-b891-efde814216c9", 00:09:48.473 "is_configured": true, 00:09:48.473 "data_offset": 0, 00:09:48.473 "data_size": 65536 00:09:48.473 } 00:09:48.473 ] 00:09:48.473 }' 00:09:48.473 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.473 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.732 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:48.732 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:48.732 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.732 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.732 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.732 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.732 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:48.732 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.732 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.732 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.732 [2024-12-07 16:35:47.543651] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.732 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.732 16:35:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.732 "name": "Existed_Raid", 00:09:48.732 "aliases": [ 00:09:48.732 "433210b1-d590-4bbb-b606-1fbc17a116f6" 00:09:48.732 ], 00:09:48.732 "product_name": "Raid Volume", 00:09:48.732 "block_size": 512, 00:09:48.732 "num_blocks": 262144, 00:09:48.732 "uuid": "433210b1-d590-4bbb-b606-1fbc17a116f6", 00:09:48.732 "assigned_rate_limits": { 00:09:48.732 "rw_ios_per_sec": 0, 00:09:48.732 "rw_mbytes_per_sec": 0, 00:09:48.732 "r_mbytes_per_sec": 0, 00:09:48.732 "w_mbytes_per_sec": 0 00:09:48.732 }, 00:09:48.732 "claimed": false, 00:09:48.732 "zoned": false, 00:09:48.732 "supported_io_types": { 00:09:48.732 "read": true, 00:09:48.732 "write": true, 00:09:48.732 "unmap": true, 00:09:48.732 "flush": true, 00:09:48.732 "reset": true, 00:09:48.732 "nvme_admin": false, 00:09:48.732 "nvme_io": false, 00:09:48.732 "nvme_io_md": false, 00:09:48.732 "write_zeroes": true, 00:09:48.732 "zcopy": false, 00:09:48.732 "get_zone_info": false, 00:09:48.732 "zone_management": false, 00:09:48.732 "zone_append": false, 00:09:48.732 "compare": false, 00:09:48.732 "compare_and_write": false, 00:09:48.732 "abort": false, 00:09:48.732 "seek_hole": false, 00:09:48.732 "seek_data": false, 00:09:48.732 "copy": false, 00:09:48.732 "nvme_iov_md": false 00:09:48.732 }, 00:09:48.732 "memory_domains": [ 00:09:48.732 { 00:09:48.732 "dma_device_id": "system", 00:09:48.732 "dma_device_type": 1 00:09:48.732 }, 00:09:48.732 { 00:09:48.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.732 "dma_device_type": 2 00:09:48.732 }, 00:09:48.733 { 00:09:48.733 "dma_device_id": "system", 00:09:48.733 "dma_device_type": 1 00:09:48.733 }, 00:09:48.733 { 00:09:48.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.733 "dma_device_type": 2 00:09:48.733 }, 00:09:48.733 { 00:09:48.733 "dma_device_id": "system", 00:09:48.733 "dma_device_type": 1 00:09:48.733 }, 00:09:48.733 { 00:09:48.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:48.733 "dma_device_type": 2 00:09:48.733 }, 00:09:48.733 { 00:09:48.733 "dma_device_id": "system", 00:09:48.733 "dma_device_type": 1 00:09:48.733 }, 00:09:48.733 { 00:09:48.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.733 "dma_device_type": 2 00:09:48.733 } 00:09:48.733 ], 00:09:48.733 "driver_specific": { 00:09:48.733 "raid": { 00:09:48.733 "uuid": "433210b1-d590-4bbb-b606-1fbc17a116f6", 00:09:48.733 "strip_size_kb": 64, 00:09:48.733 "state": "online", 00:09:48.733 "raid_level": "raid0", 00:09:48.733 "superblock": false, 00:09:48.733 "num_base_bdevs": 4, 00:09:48.733 "num_base_bdevs_discovered": 4, 00:09:48.733 "num_base_bdevs_operational": 4, 00:09:48.733 "base_bdevs_list": [ 00:09:48.733 { 00:09:48.733 "name": "BaseBdev1", 00:09:48.733 "uuid": "f72c0dc9-a0b3-4020-8138-7a3f14161662", 00:09:48.733 "is_configured": true, 00:09:48.733 "data_offset": 0, 00:09:48.733 "data_size": 65536 00:09:48.733 }, 00:09:48.733 { 00:09:48.733 "name": "BaseBdev2", 00:09:48.733 "uuid": "9c317409-13b3-47af-9075-82770a190adb", 00:09:48.733 "is_configured": true, 00:09:48.733 "data_offset": 0, 00:09:48.733 "data_size": 65536 00:09:48.733 }, 00:09:48.733 { 00:09:48.733 "name": "BaseBdev3", 00:09:48.733 "uuid": "b4d757b0-617f-4dde-b05d-38fc39246def", 00:09:48.733 "is_configured": true, 00:09:48.733 "data_offset": 0, 00:09:48.733 "data_size": 65536 00:09:48.733 }, 00:09:48.733 { 00:09:48.733 "name": "BaseBdev4", 00:09:48.733 "uuid": "4aa493f4-ee8d-4e28-b891-efde814216c9", 00:09:48.733 "is_configured": true, 00:09:48.733 "data_offset": 0, 00:09:48.733 "data_size": 65536 00:09:48.733 } 00:09:48.733 ] 00:09:48.733 } 00:09:48.733 } 00:09:48.733 }' 00:09:48.733 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:48.992 BaseBdev2 00:09:48.992 BaseBdev3 
00:09:48.992 BaseBdev4' 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.992 16:35:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.992 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.253 16:35:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.253 [2024-12-07 16:35:47.902657] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:49.253 [2024-12-07 16:35:47.902750] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.253 [2024-12-07 16:35:47.902842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.253 "name": "Existed_Raid", 00:09:49.253 "uuid": "433210b1-d590-4bbb-b606-1fbc17a116f6", 00:09:49.253 "strip_size_kb": 64, 00:09:49.253 "state": "offline", 00:09:49.253 "raid_level": "raid0", 00:09:49.253 "superblock": false, 00:09:49.253 "num_base_bdevs": 4, 00:09:49.253 "num_base_bdevs_discovered": 3, 00:09:49.253 "num_base_bdevs_operational": 3, 00:09:49.253 "base_bdevs_list": [ 00:09:49.253 { 00:09:49.253 "name": null, 00:09:49.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.253 "is_configured": false, 00:09:49.253 "data_offset": 0, 00:09:49.253 "data_size": 65536 00:09:49.253 }, 00:09:49.253 { 00:09:49.253 "name": "BaseBdev2", 00:09:49.253 "uuid": "9c317409-13b3-47af-9075-82770a190adb", 00:09:49.253 "is_configured": 
true, 00:09:49.253 "data_offset": 0, 00:09:49.253 "data_size": 65536 00:09:49.253 }, 00:09:49.253 { 00:09:49.253 "name": "BaseBdev3", 00:09:49.253 "uuid": "b4d757b0-617f-4dde-b05d-38fc39246def", 00:09:49.253 "is_configured": true, 00:09:49.253 "data_offset": 0, 00:09:49.253 "data_size": 65536 00:09:49.253 }, 00:09:49.253 { 00:09:49.253 "name": "BaseBdev4", 00:09:49.253 "uuid": "4aa493f4-ee8d-4e28-b891-efde814216c9", 00:09:49.253 "is_configured": true, 00:09:49.253 "data_offset": 0, 00:09:49.253 "data_size": 65536 00:09:49.253 } 00:09:49.253 ] 00:09:49.253 }' 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.253 16:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.513 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:49.513 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.513 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.513 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.513 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.513 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.513 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.513 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.513 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.513 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:49.513 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:49.513 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.513 [2024-12-07 16:35:48.403254] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.773 [2024-12-07 16:35:48.483637] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.773 16:35:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.773 [2024-12-07 16:35:48.559914] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:49.773 [2024-12-07 16:35:48.560014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.773 BaseBdev2 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.773 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.033 [ 00:09:50.033 { 00:09:50.033 "name": "BaseBdev2", 00:09:50.033 "aliases": [ 00:09:50.033 "d2fb2fc6-60fa-40d6-9387-3338ff297cf5" 00:09:50.033 ], 00:09:50.033 "product_name": "Malloc disk", 00:09:50.033 "block_size": 512, 00:09:50.033 "num_blocks": 65536, 00:09:50.033 "uuid": "d2fb2fc6-60fa-40d6-9387-3338ff297cf5", 00:09:50.033 "assigned_rate_limits": { 00:09:50.033 "rw_ios_per_sec": 0, 00:09:50.033 "rw_mbytes_per_sec": 0, 00:09:50.033 "r_mbytes_per_sec": 0, 00:09:50.033 "w_mbytes_per_sec": 0 00:09:50.033 }, 00:09:50.033 "claimed": false, 00:09:50.033 "zoned": false, 00:09:50.033 "supported_io_types": { 00:09:50.033 "read": true, 00:09:50.033 "write": true, 00:09:50.033 "unmap": true, 00:09:50.033 "flush": true, 00:09:50.033 "reset": true, 00:09:50.033 "nvme_admin": false, 00:09:50.033 "nvme_io": false, 00:09:50.033 "nvme_io_md": false, 00:09:50.033 "write_zeroes": true, 00:09:50.033 "zcopy": true, 00:09:50.033 "get_zone_info": false, 00:09:50.033 "zone_management": false, 00:09:50.033 "zone_append": false, 00:09:50.033 "compare": false, 00:09:50.033 "compare_and_write": false, 00:09:50.033 "abort": true, 00:09:50.033 "seek_hole": false, 00:09:50.033 
"seek_data": false, 00:09:50.033 "copy": true, 00:09:50.033 "nvme_iov_md": false 00:09:50.033 }, 00:09:50.033 "memory_domains": [ 00:09:50.033 { 00:09:50.033 "dma_device_id": "system", 00:09:50.033 "dma_device_type": 1 00:09:50.033 }, 00:09:50.033 { 00:09:50.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.033 "dma_device_type": 2 00:09:50.033 } 00:09:50.033 ], 00:09:50.033 "driver_specific": {} 00:09:50.033 } 00:09:50.033 ] 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.033 BaseBdev3 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.033 [ 00:09:50.033 { 00:09:50.033 "name": "BaseBdev3", 00:09:50.033 "aliases": [ 00:09:50.033 "af4620af-aa54-48c2-99ef-426025204639" 00:09:50.033 ], 00:09:50.033 "product_name": "Malloc disk", 00:09:50.033 "block_size": 512, 00:09:50.033 "num_blocks": 65536, 00:09:50.033 "uuid": "af4620af-aa54-48c2-99ef-426025204639", 00:09:50.033 "assigned_rate_limits": { 00:09:50.033 "rw_ios_per_sec": 0, 00:09:50.033 "rw_mbytes_per_sec": 0, 00:09:50.033 "r_mbytes_per_sec": 0, 00:09:50.033 "w_mbytes_per_sec": 0 00:09:50.033 }, 00:09:50.033 "claimed": false, 00:09:50.033 "zoned": false, 00:09:50.033 "supported_io_types": { 00:09:50.033 "read": true, 00:09:50.033 "write": true, 00:09:50.033 "unmap": true, 00:09:50.033 "flush": true, 00:09:50.033 "reset": true, 00:09:50.033 "nvme_admin": false, 00:09:50.033 "nvme_io": false, 00:09:50.033 "nvme_io_md": false, 00:09:50.033 "write_zeroes": true, 00:09:50.033 "zcopy": true, 00:09:50.033 "get_zone_info": false, 00:09:50.033 "zone_management": false, 00:09:50.033 "zone_append": false, 00:09:50.033 "compare": false, 00:09:50.033 "compare_and_write": false, 00:09:50.033 "abort": true, 00:09:50.033 "seek_hole": false, 00:09:50.033 "seek_data": false, 
00:09:50.033 "copy": true, 00:09:50.033 "nvme_iov_md": false 00:09:50.033 }, 00:09:50.033 "memory_domains": [ 00:09:50.033 { 00:09:50.033 "dma_device_id": "system", 00:09:50.033 "dma_device_type": 1 00:09:50.033 }, 00:09:50.033 { 00:09:50.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.033 "dma_device_type": 2 00:09:50.033 } 00:09:50.033 ], 00:09:50.033 "driver_specific": {} 00:09:50.033 } 00:09:50.033 ] 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.033 BaseBdev4 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:50.033 
16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.033 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.033 [ 00:09:50.033 { 00:09:50.033 "name": "BaseBdev4", 00:09:50.033 "aliases": [ 00:09:50.033 "733d4562-b2af-4fe5-8276-d37c376cc834" 00:09:50.033 ], 00:09:50.033 "product_name": "Malloc disk", 00:09:50.033 "block_size": 512, 00:09:50.033 "num_blocks": 65536, 00:09:50.033 "uuid": "733d4562-b2af-4fe5-8276-d37c376cc834", 00:09:50.033 "assigned_rate_limits": { 00:09:50.033 "rw_ios_per_sec": 0, 00:09:50.033 "rw_mbytes_per_sec": 0, 00:09:50.033 "r_mbytes_per_sec": 0, 00:09:50.033 "w_mbytes_per_sec": 0 00:09:50.033 }, 00:09:50.033 "claimed": false, 00:09:50.033 "zoned": false, 00:09:50.033 "supported_io_types": { 00:09:50.033 "read": true, 00:09:50.033 "write": true, 00:09:50.033 "unmap": true, 00:09:50.033 "flush": true, 00:09:50.033 "reset": true, 00:09:50.033 "nvme_admin": false, 00:09:50.033 "nvme_io": false, 00:09:50.033 "nvme_io_md": false, 00:09:50.033 "write_zeroes": true, 00:09:50.033 "zcopy": true, 00:09:50.033 "get_zone_info": false, 00:09:50.033 "zone_management": false, 00:09:50.033 "zone_append": false, 00:09:50.033 "compare": false, 00:09:50.033 "compare_and_write": false, 00:09:50.033 "abort": true, 00:09:50.033 "seek_hole": false, 00:09:50.034 "seek_data": false, 00:09:50.034 
"copy": true, 00:09:50.034 "nvme_iov_md": false 00:09:50.034 }, 00:09:50.034 "memory_domains": [ 00:09:50.034 { 00:09:50.034 "dma_device_id": "system", 00:09:50.034 "dma_device_type": 1 00:09:50.034 }, 00:09:50.034 { 00:09:50.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.034 "dma_device_type": 2 00:09:50.034 } 00:09:50.034 ], 00:09:50.034 "driver_specific": {} 00:09:50.034 } 00:09:50.034 ] 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.034 [2024-12-07 16:35:48.817624] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.034 [2024-12-07 16:35:48.817711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.034 [2024-12-07 16:35:48.817755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.034 [2024-12-07 16:35:48.819948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.034 [2024-12-07 16:35:48.820037] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.034 16:35:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.034 "name": "Existed_Raid", 00:09:50.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.034 "strip_size_kb": 64, 00:09:50.034 "state": "configuring", 00:09:50.034 
"raid_level": "raid0", 00:09:50.034 "superblock": false, 00:09:50.034 "num_base_bdevs": 4, 00:09:50.034 "num_base_bdevs_discovered": 3, 00:09:50.034 "num_base_bdevs_operational": 4, 00:09:50.034 "base_bdevs_list": [ 00:09:50.034 { 00:09:50.034 "name": "BaseBdev1", 00:09:50.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.034 "is_configured": false, 00:09:50.034 "data_offset": 0, 00:09:50.034 "data_size": 0 00:09:50.034 }, 00:09:50.034 { 00:09:50.034 "name": "BaseBdev2", 00:09:50.034 "uuid": "d2fb2fc6-60fa-40d6-9387-3338ff297cf5", 00:09:50.034 "is_configured": true, 00:09:50.034 "data_offset": 0, 00:09:50.034 "data_size": 65536 00:09:50.034 }, 00:09:50.034 { 00:09:50.034 "name": "BaseBdev3", 00:09:50.034 "uuid": "af4620af-aa54-48c2-99ef-426025204639", 00:09:50.034 "is_configured": true, 00:09:50.034 "data_offset": 0, 00:09:50.034 "data_size": 65536 00:09:50.034 }, 00:09:50.034 { 00:09:50.034 "name": "BaseBdev4", 00:09:50.034 "uuid": "733d4562-b2af-4fe5-8276-d37c376cc834", 00:09:50.034 "is_configured": true, 00:09:50.034 "data_offset": 0, 00:09:50.034 "data_size": 65536 00:09:50.034 } 00:09:50.034 ] 00:09:50.034 }' 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.034 16:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.602 [2024-12-07 16:35:49.280846] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.602 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.602 "name": "Existed_Raid", 00:09:50.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.602 "strip_size_kb": 64, 00:09:50.602 "state": "configuring", 00:09:50.602 "raid_level": "raid0", 00:09:50.602 "superblock": false, 00:09:50.602 
"num_base_bdevs": 4, 00:09:50.602 "num_base_bdevs_discovered": 2, 00:09:50.603 "num_base_bdevs_operational": 4, 00:09:50.603 "base_bdevs_list": [ 00:09:50.603 { 00:09:50.603 "name": "BaseBdev1", 00:09:50.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.603 "is_configured": false, 00:09:50.603 "data_offset": 0, 00:09:50.603 "data_size": 0 00:09:50.603 }, 00:09:50.603 { 00:09:50.603 "name": null, 00:09:50.603 "uuid": "d2fb2fc6-60fa-40d6-9387-3338ff297cf5", 00:09:50.603 "is_configured": false, 00:09:50.603 "data_offset": 0, 00:09:50.603 "data_size": 65536 00:09:50.603 }, 00:09:50.603 { 00:09:50.603 "name": "BaseBdev3", 00:09:50.603 "uuid": "af4620af-aa54-48c2-99ef-426025204639", 00:09:50.603 "is_configured": true, 00:09:50.603 "data_offset": 0, 00:09:50.603 "data_size": 65536 00:09:50.603 }, 00:09:50.603 { 00:09:50.603 "name": "BaseBdev4", 00:09:50.603 "uuid": "733d4562-b2af-4fe5-8276-d37c376cc834", 00:09:50.603 "is_configured": true, 00:09:50.603 "data_offset": 0, 00:09:50.603 "data_size": 65536 00:09:50.603 } 00:09:50.603 ] 00:09:50.603 }' 00:09:50.603 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.603 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.862 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:50.862 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.862 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.862 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.121 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.121 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:51.121 16:35:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:51.121 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.121 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.121 [2024-12-07 16:35:49.793137] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.121 BaseBdev1 00:09:51.121 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.121 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:51.122 [ 00:09:51.122 { 00:09:51.122 "name": "BaseBdev1", 00:09:51.122 "aliases": [ 00:09:51.122 "8c5aa8d6-815f-4ffa-aeb8-cce6070a4cac" 00:09:51.122 ], 00:09:51.122 "product_name": "Malloc disk", 00:09:51.122 "block_size": 512, 00:09:51.122 "num_blocks": 65536, 00:09:51.122 "uuid": "8c5aa8d6-815f-4ffa-aeb8-cce6070a4cac", 00:09:51.122 "assigned_rate_limits": { 00:09:51.122 "rw_ios_per_sec": 0, 00:09:51.122 "rw_mbytes_per_sec": 0, 00:09:51.122 "r_mbytes_per_sec": 0, 00:09:51.122 "w_mbytes_per_sec": 0 00:09:51.122 }, 00:09:51.122 "claimed": true, 00:09:51.122 "claim_type": "exclusive_write", 00:09:51.122 "zoned": false, 00:09:51.122 "supported_io_types": { 00:09:51.122 "read": true, 00:09:51.122 "write": true, 00:09:51.122 "unmap": true, 00:09:51.122 "flush": true, 00:09:51.122 "reset": true, 00:09:51.122 "nvme_admin": false, 00:09:51.122 "nvme_io": false, 00:09:51.122 "nvme_io_md": false, 00:09:51.122 "write_zeroes": true, 00:09:51.122 "zcopy": true, 00:09:51.122 "get_zone_info": false, 00:09:51.122 "zone_management": false, 00:09:51.122 "zone_append": false, 00:09:51.122 "compare": false, 00:09:51.122 "compare_and_write": false, 00:09:51.122 "abort": true, 00:09:51.122 "seek_hole": false, 00:09:51.122 "seek_data": false, 00:09:51.122 "copy": true, 00:09:51.122 "nvme_iov_md": false 00:09:51.122 }, 00:09:51.122 "memory_domains": [ 00:09:51.122 { 00:09:51.122 "dma_device_id": "system", 00:09:51.122 "dma_device_type": 1 00:09:51.122 }, 00:09:51.122 { 00:09:51.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.122 "dma_device_type": 2 00:09:51.122 } 00:09:51.122 ], 00:09:51.122 "driver_specific": {} 00:09:51.122 } 00:09:51.122 ] 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.122 "name": "Existed_Raid", 00:09:51.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.122 "strip_size_kb": 64, 00:09:51.122 "state": "configuring", 00:09:51.122 "raid_level": "raid0", 00:09:51.122 "superblock": false, 
00:09:51.122 "num_base_bdevs": 4, 00:09:51.122 "num_base_bdevs_discovered": 3, 00:09:51.122 "num_base_bdevs_operational": 4, 00:09:51.122 "base_bdevs_list": [ 00:09:51.122 { 00:09:51.122 "name": "BaseBdev1", 00:09:51.122 "uuid": "8c5aa8d6-815f-4ffa-aeb8-cce6070a4cac", 00:09:51.122 "is_configured": true, 00:09:51.122 "data_offset": 0, 00:09:51.122 "data_size": 65536 00:09:51.122 }, 00:09:51.122 { 00:09:51.122 "name": null, 00:09:51.122 "uuid": "d2fb2fc6-60fa-40d6-9387-3338ff297cf5", 00:09:51.122 "is_configured": false, 00:09:51.122 "data_offset": 0, 00:09:51.122 "data_size": 65536 00:09:51.122 }, 00:09:51.122 { 00:09:51.122 "name": "BaseBdev3", 00:09:51.122 "uuid": "af4620af-aa54-48c2-99ef-426025204639", 00:09:51.122 "is_configured": true, 00:09:51.122 "data_offset": 0, 00:09:51.122 "data_size": 65536 00:09:51.122 }, 00:09:51.122 { 00:09:51.122 "name": "BaseBdev4", 00:09:51.122 "uuid": "733d4562-b2af-4fe5-8276-d37c376cc834", 00:09:51.122 "is_configured": true, 00:09:51.122 "data_offset": 0, 00:09:51.122 "data_size": 65536 00:09:51.122 } 00:09:51.122 ] 00:09:51.122 }' 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.122 16:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.380 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.380 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.380 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.380 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:51.640 16:35:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.640 [2024-12-07 16:35:50.316271] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.640 16:35:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.640 "name": "Existed_Raid", 00:09:51.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.640 "strip_size_kb": 64, 00:09:51.640 "state": "configuring", 00:09:51.640 "raid_level": "raid0", 00:09:51.640 "superblock": false, 00:09:51.640 "num_base_bdevs": 4, 00:09:51.640 "num_base_bdevs_discovered": 2, 00:09:51.640 "num_base_bdevs_operational": 4, 00:09:51.640 "base_bdevs_list": [ 00:09:51.640 { 00:09:51.640 "name": "BaseBdev1", 00:09:51.640 "uuid": "8c5aa8d6-815f-4ffa-aeb8-cce6070a4cac", 00:09:51.640 "is_configured": true, 00:09:51.640 "data_offset": 0, 00:09:51.640 "data_size": 65536 00:09:51.640 }, 00:09:51.640 { 00:09:51.640 "name": null, 00:09:51.640 "uuid": "d2fb2fc6-60fa-40d6-9387-3338ff297cf5", 00:09:51.640 "is_configured": false, 00:09:51.640 "data_offset": 0, 00:09:51.640 "data_size": 65536 00:09:51.640 }, 00:09:51.640 { 00:09:51.640 "name": null, 00:09:51.640 "uuid": "af4620af-aa54-48c2-99ef-426025204639", 00:09:51.640 "is_configured": false, 00:09:51.640 "data_offset": 0, 00:09:51.640 "data_size": 65536 00:09:51.640 }, 00:09:51.640 { 00:09:51.640 "name": "BaseBdev4", 00:09:51.640 "uuid": "733d4562-b2af-4fe5-8276-d37c376cc834", 00:09:51.640 "is_configured": true, 00:09:51.640 "data_offset": 0, 00:09:51.640 "data_size": 65536 00:09:51.640 } 00:09:51.640 ] 00:09:51.640 }' 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.640 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.899 16:35:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.899 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.899 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:51.899 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.159 [2024-12-07 16:35:50.823477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.159 "name": "Existed_Raid", 00:09:52.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.159 "strip_size_kb": 64, 00:09:52.159 "state": "configuring", 00:09:52.159 "raid_level": "raid0", 00:09:52.159 "superblock": false, 00:09:52.159 "num_base_bdevs": 4, 00:09:52.159 "num_base_bdevs_discovered": 3, 00:09:52.159 "num_base_bdevs_operational": 4, 00:09:52.159 "base_bdevs_list": [ 00:09:52.159 { 00:09:52.159 "name": "BaseBdev1", 00:09:52.159 "uuid": "8c5aa8d6-815f-4ffa-aeb8-cce6070a4cac", 00:09:52.159 "is_configured": true, 00:09:52.159 "data_offset": 0, 00:09:52.159 "data_size": 65536 00:09:52.159 }, 00:09:52.159 { 00:09:52.159 "name": null, 00:09:52.159 "uuid": "d2fb2fc6-60fa-40d6-9387-3338ff297cf5", 00:09:52.159 "is_configured": false, 00:09:52.159 "data_offset": 0, 00:09:52.159 "data_size": 65536 00:09:52.159 }, 00:09:52.159 { 00:09:52.159 "name": "BaseBdev3", 00:09:52.159 "uuid": "af4620af-aa54-48c2-99ef-426025204639", 
00:09:52.159 "is_configured": true, 00:09:52.159 "data_offset": 0, 00:09:52.159 "data_size": 65536 00:09:52.159 }, 00:09:52.159 { 00:09:52.159 "name": "BaseBdev4", 00:09:52.159 "uuid": "733d4562-b2af-4fe5-8276-d37c376cc834", 00:09:52.159 "is_configured": true, 00:09:52.159 "data_offset": 0, 00:09:52.159 "data_size": 65536 00:09:52.159 } 00:09:52.159 ] 00:09:52.159 }' 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.159 16:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.419 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.419 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:52.419 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.419 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.419 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.679 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:52.679 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:52.679 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.679 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.679 [2024-12-07 16:35:51.334676] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:52.679 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.679 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.679 16:35:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.679 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.679 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.679 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.679 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.680 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.680 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.680 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.680 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.680 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.680 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.680 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.680 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.680 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.680 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.680 "name": "Existed_Raid", 00:09:52.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.680 "strip_size_kb": 64, 00:09:52.680 "state": "configuring", 00:09:52.680 "raid_level": "raid0", 00:09:52.680 "superblock": false, 00:09:52.680 "num_base_bdevs": 4, 00:09:52.680 "num_base_bdevs_discovered": 2, 00:09:52.680 
"num_base_bdevs_operational": 4, 00:09:52.680 "base_bdevs_list": [ 00:09:52.680 { 00:09:52.680 "name": null, 00:09:52.680 "uuid": "8c5aa8d6-815f-4ffa-aeb8-cce6070a4cac", 00:09:52.680 "is_configured": false, 00:09:52.680 "data_offset": 0, 00:09:52.680 "data_size": 65536 00:09:52.680 }, 00:09:52.680 { 00:09:52.680 "name": null, 00:09:52.680 "uuid": "d2fb2fc6-60fa-40d6-9387-3338ff297cf5", 00:09:52.680 "is_configured": false, 00:09:52.680 "data_offset": 0, 00:09:52.680 "data_size": 65536 00:09:52.680 }, 00:09:52.680 { 00:09:52.680 "name": "BaseBdev3", 00:09:52.680 "uuid": "af4620af-aa54-48c2-99ef-426025204639", 00:09:52.680 "is_configured": true, 00:09:52.680 "data_offset": 0, 00:09:52.680 "data_size": 65536 00:09:52.680 }, 00:09:52.680 { 00:09:52.680 "name": "BaseBdev4", 00:09:52.680 "uuid": "733d4562-b2af-4fe5-8276-d37c376cc834", 00:09:52.680 "is_configured": true, 00:09:52.680 "data_offset": 0, 00:09:52.680 "data_size": 65536 00:09:52.680 } 00:09:52.680 ] 00:09:52.680 }' 00:09:52.680 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.680 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.939 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:52.939 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.939 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.939 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.939 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.199 [2024-12-07 16:35:51.841576] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.199 16:35:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.199 "name": "Existed_Raid", 00:09:53.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.199 "strip_size_kb": 64, 00:09:53.199 "state": "configuring", 00:09:53.199 "raid_level": "raid0", 00:09:53.199 "superblock": false, 00:09:53.199 "num_base_bdevs": 4, 00:09:53.199 "num_base_bdevs_discovered": 3, 00:09:53.199 "num_base_bdevs_operational": 4, 00:09:53.199 "base_bdevs_list": [ 00:09:53.199 { 00:09:53.199 "name": null, 00:09:53.199 "uuid": "8c5aa8d6-815f-4ffa-aeb8-cce6070a4cac", 00:09:53.199 "is_configured": false, 00:09:53.199 "data_offset": 0, 00:09:53.199 "data_size": 65536 00:09:53.199 }, 00:09:53.199 { 00:09:53.199 "name": "BaseBdev2", 00:09:53.199 "uuid": "d2fb2fc6-60fa-40d6-9387-3338ff297cf5", 00:09:53.199 "is_configured": true, 00:09:53.199 "data_offset": 0, 00:09:53.199 "data_size": 65536 00:09:53.199 }, 00:09:53.199 { 00:09:53.199 "name": "BaseBdev3", 00:09:53.199 "uuid": "af4620af-aa54-48c2-99ef-426025204639", 00:09:53.199 "is_configured": true, 00:09:53.199 "data_offset": 0, 00:09:53.199 "data_size": 65536 00:09:53.199 }, 00:09:53.199 { 00:09:53.199 "name": "BaseBdev4", 00:09:53.199 "uuid": "733d4562-b2af-4fe5-8276-d37c376cc834", 00:09:53.199 "is_configured": true, 00:09:53.199 "data_offset": 0, 00:09:53.199 "data_size": 65536 00:09:53.199 } 00:09:53.199 ] 00:09:53.199 }' 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.199 16:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.460 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.460 16:35:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:53.460 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.460 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.460 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.460 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:53.460 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:53.460 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.460 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.460 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.460 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.460 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8c5aa8d6-815f-4ffa-aeb8-cce6070a4cac 00:09:53.460 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.460 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.722 [2024-12-07 16:35:52.361944] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:53.722 [2024-12-07 16:35:52.362058] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:53.722 [2024-12-07 16:35:52.362071] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:53.722 [2024-12-07 16:35:52.362375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:53.722 [2024-12-07 16:35:52.362517] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:53.722 [2024-12-07 16:35:52.362531] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:53.722 [2024-12-07 16:35:52.362763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.722 NewBaseBdev 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:53.722 [ 00:09:53.722 { 00:09:53.722 "name": "NewBaseBdev", 00:09:53.722 "aliases": [ 00:09:53.722 "8c5aa8d6-815f-4ffa-aeb8-cce6070a4cac" 00:09:53.722 ], 00:09:53.722 "product_name": "Malloc disk", 00:09:53.722 "block_size": 512, 00:09:53.722 "num_blocks": 65536, 00:09:53.722 "uuid": "8c5aa8d6-815f-4ffa-aeb8-cce6070a4cac", 00:09:53.722 "assigned_rate_limits": { 00:09:53.722 "rw_ios_per_sec": 0, 00:09:53.722 "rw_mbytes_per_sec": 0, 00:09:53.722 "r_mbytes_per_sec": 0, 00:09:53.722 "w_mbytes_per_sec": 0 00:09:53.722 }, 00:09:53.722 "claimed": true, 00:09:53.722 "claim_type": "exclusive_write", 00:09:53.722 "zoned": false, 00:09:53.722 "supported_io_types": { 00:09:53.722 "read": true, 00:09:53.722 "write": true, 00:09:53.722 "unmap": true, 00:09:53.722 "flush": true, 00:09:53.722 "reset": true, 00:09:53.722 "nvme_admin": false, 00:09:53.722 "nvme_io": false, 00:09:53.722 "nvme_io_md": false, 00:09:53.722 "write_zeroes": true, 00:09:53.722 "zcopy": true, 00:09:53.722 "get_zone_info": false, 00:09:53.722 "zone_management": false, 00:09:53.722 "zone_append": false, 00:09:53.722 "compare": false, 00:09:53.722 "compare_and_write": false, 00:09:53.722 "abort": true, 00:09:53.722 "seek_hole": false, 00:09:53.722 "seek_data": false, 00:09:53.722 "copy": true, 00:09:53.722 "nvme_iov_md": false 00:09:53.722 }, 00:09:53.722 "memory_domains": [ 00:09:53.722 { 00:09:53.722 "dma_device_id": "system", 00:09:53.722 "dma_device_type": 1 00:09:53.722 }, 00:09:53.722 { 00:09:53.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.722 "dma_device_type": 2 00:09:53.722 } 00:09:53.722 ], 00:09:53.722 "driver_specific": {} 00:09:53.722 } 00:09:53.722 ] 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:09:53.722 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.723 "name": "Existed_Raid", 00:09:53.723 "uuid": "5f8157b5-7252-497b-824a-2b7fd25fe5ff", 00:09:53.723 "strip_size_kb": 64, 00:09:53.723 "state": "online", 00:09:53.723 "raid_level": "raid0", 00:09:53.723 "superblock": false, 00:09:53.723 "num_base_bdevs": 4, 00:09:53.723 
"num_base_bdevs_discovered": 4, 00:09:53.723 "num_base_bdevs_operational": 4, 00:09:53.723 "base_bdevs_list": [ 00:09:53.723 { 00:09:53.723 "name": "NewBaseBdev", 00:09:53.723 "uuid": "8c5aa8d6-815f-4ffa-aeb8-cce6070a4cac", 00:09:53.723 "is_configured": true, 00:09:53.723 "data_offset": 0, 00:09:53.723 "data_size": 65536 00:09:53.723 }, 00:09:53.723 { 00:09:53.723 "name": "BaseBdev2", 00:09:53.723 "uuid": "d2fb2fc6-60fa-40d6-9387-3338ff297cf5", 00:09:53.723 "is_configured": true, 00:09:53.723 "data_offset": 0, 00:09:53.723 "data_size": 65536 00:09:53.723 }, 00:09:53.723 { 00:09:53.723 "name": "BaseBdev3", 00:09:53.723 "uuid": "af4620af-aa54-48c2-99ef-426025204639", 00:09:53.723 "is_configured": true, 00:09:53.723 "data_offset": 0, 00:09:53.723 "data_size": 65536 00:09:53.723 }, 00:09:53.723 { 00:09:53.723 "name": "BaseBdev4", 00:09:53.723 "uuid": "733d4562-b2af-4fe5-8276-d37c376cc834", 00:09:53.723 "is_configured": true, 00:09:53.723 "data_offset": 0, 00:09:53.723 "data_size": 65536 00:09:53.723 } 00:09:53.723 ] 00:09:53.723 }' 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.723 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.983 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:53.983 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:53.983 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.983 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:53.983 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.983 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.983 16:35:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:53.983 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.983 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.983 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.983 [2024-12-07 16:35:52.841560] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.983 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.243 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:54.243 "name": "Existed_Raid", 00:09:54.243 "aliases": [ 00:09:54.243 "5f8157b5-7252-497b-824a-2b7fd25fe5ff" 00:09:54.243 ], 00:09:54.243 "product_name": "Raid Volume", 00:09:54.243 "block_size": 512, 00:09:54.243 "num_blocks": 262144, 00:09:54.243 "uuid": "5f8157b5-7252-497b-824a-2b7fd25fe5ff", 00:09:54.243 "assigned_rate_limits": { 00:09:54.243 "rw_ios_per_sec": 0, 00:09:54.243 "rw_mbytes_per_sec": 0, 00:09:54.243 "r_mbytes_per_sec": 0, 00:09:54.243 "w_mbytes_per_sec": 0 00:09:54.243 }, 00:09:54.243 "claimed": false, 00:09:54.243 "zoned": false, 00:09:54.243 "supported_io_types": { 00:09:54.243 "read": true, 00:09:54.243 "write": true, 00:09:54.243 "unmap": true, 00:09:54.243 "flush": true, 00:09:54.243 "reset": true, 00:09:54.243 "nvme_admin": false, 00:09:54.243 "nvme_io": false, 00:09:54.243 "nvme_io_md": false, 00:09:54.243 "write_zeroes": true, 00:09:54.243 "zcopy": false, 00:09:54.243 "get_zone_info": false, 00:09:54.243 "zone_management": false, 00:09:54.243 "zone_append": false, 00:09:54.243 "compare": false, 00:09:54.243 "compare_and_write": false, 00:09:54.243 "abort": false, 00:09:54.243 "seek_hole": false, 00:09:54.243 "seek_data": false, 00:09:54.243 "copy": false, 00:09:54.243 "nvme_iov_md": false 00:09:54.243 }, 00:09:54.243 "memory_domains": [ 
00:09:54.243 { 00:09:54.243 "dma_device_id": "system", 00:09:54.243 "dma_device_type": 1 00:09:54.243 }, 00:09:54.243 { 00:09:54.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.243 "dma_device_type": 2 00:09:54.243 }, 00:09:54.243 { 00:09:54.243 "dma_device_id": "system", 00:09:54.243 "dma_device_type": 1 00:09:54.243 }, 00:09:54.243 { 00:09:54.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.243 "dma_device_type": 2 00:09:54.243 }, 00:09:54.243 { 00:09:54.243 "dma_device_id": "system", 00:09:54.243 "dma_device_type": 1 00:09:54.243 }, 00:09:54.243 { 00:09:54.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.243 "dma_device_type": 2 00:09:54.243 }, 00:09:54.243 { 00:09:54.244 "dma_device_id": "system", 00:09:54.244 "dma_device_type": 1 00:09:54.244 }, 00:09:54.244 { 00:09:54.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.244 "dma_device_type": 2 00:09:54.244 } 00:09:54.244 ], 00:09:54.244 "driver_specific": { 00:09:54.244 "raid": { 00:09:54.244 "uuid": "5f8157b5-7252-497b-824a-2b7fd25fe5ff", 00:09:54.244 "strip_size_kb": 64, 00:09:54.244 "state": "online", 00:09:54.244 "raid_level": "raid0", 00:09:54.244 "superblock": false, 00:09:54.244 "num_base_bdevs": 4, 00:09:54.244 "num_base_bdevs_discovered": 4, 00:09:54.244 "num_base_bdevs_operational": 4, 00:09:54.244 "base_bdevs_list": [ 00:09:54.244 { 00:09:54.244 "name": "NewBaseBdev", 00:09:54.244 "uuid": "8c5aa8d6-815f-4ffa-aeb8-cce6070a4cac", 00:09:54.244 "is_configured": true, 00:09:54.244 "data_offset": 0, 00:09:54.244 "data_size": 65536 00:09:54.244 }, 00:09:54.244 { 00:09:54.244 "name": "BaseBdev2", 00:09:54.244 "uuid": "d2fb2fc6-60fa-40d6-9387-3338ff297cf5", 00:09:54.244 "is_configured": true, 00:09:54.244 "data_offset": 0, 00:09:54.244 "data_size": 65536 00:09:54.244 }, 00:09:54.244 { 00:09:54.244 "name": "BaseBdev3", 00:09:54.244 "uuid": "af4620af-aa54-48c2-99ef-426025204639", 00:09:54.244 "is_configured": true, 00:09:54.244 "data_offset": 0, 00:09:54.244 "data_size": 65536 
00:09:54.244 }, 00:09:54.244 { 00:09:54.244 "name": "BaseBdev4", 00:09:54.244 "uuid": "733d4562-b2af-4fe5-8276-d37c376cc834", 00:09:54.244 "is_configured": true, 00:09:54.244 "data_offset": 0, 00:09:54.244 "data_size": 65536 00:09:54.244 } 00:09:54.244 ] 00:09:54.244 } 00:09:54.244 } 00:09:54.244 }' 00:09:54.244 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:54.244 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:54.244 BaseBdev2 00:09:54.244 BaseBdev3 00:09:54.244 BaseBdev4' 00:09:54.244 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.244 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:54.244 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.244 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:54.244 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.244 16:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.244 16:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.244 
16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.244 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.516 [2024-12-07 16:35:53.172624] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.516 [2024-12-07 16:35:53.172667] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.516 [2024-12-07 16:35:53.172784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.516 [2024-12-07 16:35:53.172866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.516 [2024-12-07 16:35:53.172877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80652 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 80652 ']' 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80652 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80652 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.516 killing process with pid 80652 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80652' 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80652 00:09:54.516 [2024-12-07 16:35:53.211752] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.516 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80652 00:09:54.516 [2024-12-07 16:35:53.293160] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.791 16:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:54.791 00:09:54.791 real 0m10.055s 00:09:54.791 user 0m16.828s 00:09:54.791 sys 0m2.210s 00:09:54.791 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.791 16:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.791 ************************************ 00:09:54.791 END TEST raid_state_function_test 00:09:54.791 ************************************ 00:09:55.052 16:35:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:09:55.052 16:35:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:55.052 16:35:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.052 16:35:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.052 ************************************ 00:09:55.052 START TEST raid_state_function_test_sb 00:09:55.052 ************************************ 00:09:55.052 16:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:09:55.052 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:55.052 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:55.052 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:55.052 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:55.052 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:55.052 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.052 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:55.052 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.052 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.052 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:55.052 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.052 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.052 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:55.053 
16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:55.053 Process raid pid: 81301 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81301 00:09:55.053 16:35:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81301' 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81301 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81301 ']' 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:55.053 16:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.053 [2024-12-07 16:35:53.855252] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:55.053 [2024-12-07 16:35:53.855481] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.313 [2024-12-07 16:35:54.006880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.313 [2024-12-07 16:35:54.076286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.313 [2024-12-07 16:35:54.154632] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.313 [2024-12-07 16:35:54.154675] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.882 [2024-12-07 16:35:54.687096] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.882 [2024-12-07 16:35:54.687151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.882 [2024-12-07 16:35:54.687165] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.882 [2024-12-07 16:35:54.687176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.882 [2024-12-07 16:35:54.687182] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:55.882 [2024-12-07 16:35:54.687196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.882 [2024-12-07 16:35:54.687202] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:55.882 [2024-12-07 16:35:54.687211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.882 16:35:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.882 "name": "Existed_Raid", 00:09:55.882 "uuid": "7e705f0a-8719-4725-8dfd-3e7ef3efe1ef", 00:09:55.882 "strip_size_kb": 64, 00:09:55.882 "state": "configuring", 00:09:55.882 "raid_level": "raid0", 00:09:55.882 "superblock": true, 00:09:55.882 "num_base_bdevs": 4, 00:09:55.882 "num_base_bdevs_discovered": 0, 00:09:55.882 "num_base_bdevs_operational": 4, 00:09:55.882 "base_bdevs_list": [ 00:09:55.882 { 00:09:55.882 "name": "BaseBdev1", 00:09:55.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.882 "is_configured": false, 00:09:55.882 "data_offset": 0, 00:09:55.882 "data_size": 0 00:09:55.882 }, 00:09:55.882 { 00:09:55.882 "name": "BaseBdev2", 00:09:55.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.882 "is_configured": false, 00:09:55.882 "data_offset": 0, 00:09:55.882 "data_size": 0 00:09:55.882 }, 00:09:55.882 { 00:09:55.882 "name": "BaseBdev3", 00:09:55.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.882 "is_configured": false, 00:09:55.882 "data_offset": 0, 00:09:55.882 "data_size": 0 00:09:55.882 }, 00:09:55.882 { 00:09:55.882 "name": "BaseBdev4", 00:09:55.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.882 "is_configured": false, 00:09:55.882 "data_offset": 0, 00:09:55.882 "data_size": 0 00:09:55.882 } 00:09:55.882 ] 00:09:55.882 }' 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.882 16:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.450 16:35:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:56.450 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.451 [2024-12-07 16:35:55.138167] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.451 [2024-12-07 16:35:55.138215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.451 [2024-12-07 16:35:55.146212] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.451 [2024-12-07 16:35:55.146303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.451 [2024-12-07 16:35:55.146317] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.451 [2024-12-07 16:35:55.146328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.451 [2024-12-07 16:35:55.146334] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.451 [2024-12-07 16:35:55.146351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.451 [2024-12-07 16:35:55.146357] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:09:56.451 [2024-12-07 16:35:55.146367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.451 [2024-12-07 16:35:55.169561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.451 BaseBdev1 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.451 [ 00:09:56.451 { 00:09:56.451 "name": "BaseBdev1", 00:09:56.451 "aliases": [ 00:09:56.451 "3cb73487-f656-4536-b5b4-49b7fe5a2143" 00:09:56.451 ], 00:09:56.451 "product_name": "Malloc disk", 00:09:56.451 "block_size": 512, 00:09:56.451 "num_blocks": 65536, 00:09:56.451 "uuid": "3cb73487-f656-4536-b5b4-49b7fe5a2143", 00:09:56.451 "assigned_rate_limits": { 00:09:56.451 "rw_ios_per_sec": 0, 00:09:56.451 "rw_mbytes_per_sec": 0, 00:09:56.451 "r_mbytes_per_sec": 0, 00:09:56.451 "w_mbytes_per_sec": 0 00:09:56.451 }, 00:09:56.451 "claimed": true, 00:09:56.451 "claim_type": "exclusive_write", 00:09:56.451 "zoned": false, 00:09:56.451 "supported_io_types": { 00:09:56.451 "read": true, 00:09:56.451 "write": true, 00:09:56.451 "unmap": true, 00:09:56.451 "flush": true, 00:09:56.451 "reset": true, 00:09:56.451 "nvme_admin": false, 00:09:56.451 "nvme_io": false, 00:09:56.451 "nvme_io_md": false, 00:09:56.451 "write_zeroes": true, 00:09:56.451 "zcopy": true, 00:09:56.451 "get_zone_info": false, 00:09:56.451 "zone_management": false, 00:09:56.451 "zone_append": false, 00:09:56.451 "compare": false, 00:09:56.451 "compare_and_write": false, 00:09:56.451 "abort": true, 00:09:56.451 "seek_hole": false, 00:09:56.451 "seek_data": false, 00:09:56.451 "copy": true, 00:09:56.451 "nvme_iov_md": false 00:09:56.451 }, 00:09:56.451 "memory_domains": [ 00:09:56.451 { 00:09:56.451 "dma_device_id": "system", 00:09:56.451 "dma_device_type": 1 00:09:56.451 }, 00:09:56.451 { 00:09:56.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.451 "dma_device_type": 2 00:09:56.451 } 
00:09:56.451 ], 00:09:56.451 "driver_specific": {} 00:09:56.451 } 00:09:56.451 ] 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.451 16:35:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.451 "name": "Existed_Raid", 00:09:56.451 "uuid": "332aa5bd-8c18-4522-ab9a-2a56508d13f1", 00:09:56.451 "strip_size_kb": 64, 00:09:56.451 "state": "configuring", 00:09:56.451 "raid_level": "raid0", 00:09:56.451 "superblock": true, 00:09:56.451 "num_base_bdevs": 4, 00:09:56.451 "num_base_bdevs_discovered": 1, 00:09:56.451 "num_base_bdevs_operational": 4, 00:09:56.451 "base_bdevs_list": [ 00:09:56.451 { 00:09:56.451 "name": "BaseBdev1", 00:09:56.451 "uuid": "3cb73487-f656-4536-b5b4-49b7fe5a2143", 00:09:56.451 "is_configured": true, 00:09:56.451 "data_offset": 2048, 00:09:56.451 "data_size": 63488 00:09:56.451 }, 00:09:56.451 { 00:09:56.451 "name": "BaseBdev2", 00:09:56.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.451 "is_configured": false, 00:09:56.451 "data_offset": 0, 00:09:56.451 "data_size": 0 00:09:56.451 }, 00:09:56.451 { 00:09:56.451 "name": "BaseBdev3", 00:09:56.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.451 "is_configured": false, 00:09:56.451 "data_offset": 0, 00:09:56.451 "data_size": 0 00:09:56.451 }, 00:09:56.451 { 00:09:56.451 "name": "BaseBdev4", 00:09:56.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.451 "is_configured": false, 00:09:56.451 "data_offset": 0, 00:09:56.451 "data_size": 0 00:09:56.451 } 00:09:56.451 ] 00:09:56.451 }' 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.451 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.019 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:57.019 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.019 16:35:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.019 [2024-12-07 16:35:55.652786] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.019 [2024-12-07 16:35:55.652888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:57.019 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.019 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:57.019 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.019 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.019 [2024-12-07 16:35:55.664802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.019 [2024-12-07 16:35:55.666977] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.019 [2024-12-07 16:35:55.667018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.020 [2024-12-07 16:35:55.667029] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:57.020 [2024-12-07 16:35:55.667038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.020 [2024-12-07 16:35:55.667044] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:57.020 [2024-12-07 16:35:55.667052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:57.020 "name": "Existed_Raid", 00:09:57.020 "uuid": "5a186cc6-8eac-43fd-9176-6ed142473605", 00:09:57.020 "strip_size_kb": 64, 00:09:57.020 "state": "configuring", 00:09:57.020 "raid_level": "raid0", 00:09:57.020 "superblock": true, 00:09:57.020 "num_base_bdevs": 4, 00:09:57.020 "num_base_bdevs_discovered": 1, 00:09:57.020 "num_base_bdevs_operational": 4, 00:09:57.020 "base_bdevs_list": [ 00:09:57.020 { 00:09:57.020 "name": "BaseBdev1", 00:09:57.020 "uuid": "3cb73487-f656-4536-b5b4-49b7fe5a2143", 00:09:57.020 "is_configured": true, 00:09:57.020 "data_offset": 2048, 00:09:57.020 "data_size": 63488 00:09:57.020 }, 00:09:57.020 { 00:09:57.020 "name": "BaseBdev2", 00:09:57.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.020 "is_configured": false, 00:09:57.020 "data_offset": 0, 00:09:57.020 "data_size": 0 00:09:57.020 }, 00:09:57.020 { 00:09:57.020 "name": "BaseBdev3", 00:09:57.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.020 "is_configured": false, 00:09:57.020 "data_offset": 0, 00:09:57.020 "data_size": 0 00:09:57.020 }, 00:09:57.020 { 00:09:57.020 "name": "BaseBdev4", 00:09:57.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.020 "is_configured": false, 00:09:57.020 "data_offset": 0, 00:09:57.020 "data_size": 0 00:09:57.020 } 00:09:57.020 ] 00:09:57.020 }' 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.020 16:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.279 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.279 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.279 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.279 [2024-12-07 16:35:56.083563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:09:57.279 BaseBdev2 00:09:57.279 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.279 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:57.279 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:57.279 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:57.279 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.280 [ 00:09:57.280 { 00:09:57.280 "name": "BaseBdev2", 00:09:57.280 "aliases": [ 00:09:57.280 "243ff104-7bde-4d30-bae4-de26fb7a3ef4" 00:09:57.280 ], 00:09:57.280 "product_name": "Malloc disk", 00:09:57.280 "block_size": 512, 00:09:57.280 "num_blocks": 65536, 00:09:57.280 "uuid": "243ff104-7bde-4d30-bae4-de26fb7a3ef4", 
00:09:57.280 "assigned_rate_limits": { 00:09:57.280 "rw_ios_per_sec": 0, 00:09:57.280 "rw_mbytes_per_sec": 0, 00:09:57.280 "r_mbytes_per_sec": 0, 00:09:57.280 "w_mbytes_per_sec": 0 00:09:57.280 }, 00:09:57.280 "claimed": true, 00:09:57.280 "claim_type": "exclusive_write", 00:09:57.280 "zoned": false, 00:09:57.280 "supported_io_types": { 00:09:57.280 "read": true, 00:09:57.280 "write": true, 00:09:57.280 "unmap": true, 00:09:57.280 "flush": true, 00:09:57.280 "reset": true, 00:09:57.280 "nvme_admin": false, 00:09:57.280 "nvme_io": false, 00:09:57.280 "nvme_io_md": false, 00:09:57.280 "write_zeroes": true, 00:09:57.280 "zcopy": true, 00:09:57.280 "get_zone_info": false, 00:09:57.280 "zone_management": false, 00:09:57.280 "zone_append": false, 00:09:57.280 "compare": false, 00:09:57.280 "compare_and_write": false, 00:09:57.280 "abort": true, 00:09:57.280 "seek_hole": false, 00:09:57.280 "seek_data": false, 00:09:57.280 "copy": true, 00:09:57.280 "nvme_iov_md": false 00:09:57.280 }, 00:09:57.280 "memory_domains": [ 00:09:57.280 { 00:09:57.280 "dma_device_id": "system", 00:09:57.280 "dma_device_type": 1 00:09:57.280 }, 00:09:57.280 { 00:09:57.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.280 "dma_device_type": 2 00:09:57.280 } 00:09:57.280 ], 00:09:57.280 "driver_specific": {} 00:09:57.280 } 00:09:57.280 ] 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.280 "name": "Existed_Raid", 00:09:57.280 "uuid": "5a186cc6-8eac-43fd-9176-6ed142473605", 00:09:57.280 "strip_size_kb": 64, 00:09:57.280 "state": "configuring", 00:09:57.280 "raid_level": "raid0", 00:09:57.280 "superblock": true, 00:09:57.280 "num_base_bdevs": 4, 00:09:57.280 "num_base_bdevs_discovered": 2, 00:09:57.280 
"num_base_bdevs_operational": 4, 00:09:57.280 "base_bdevs_list": [ 00:09:57.280 { 00:09:57.280 "name": "BaseBdev1", 00:09:57.280 "uuid": "3cb73487-f656-4536-b5b4-49b7fe5a2143", 00:09:57.280 "is_configured": true, 00:09:57.280 "data_offset": 2048, 00:09:57.280 "data_size": 63488 00:09:57.280 }, 00:09:57.280 { 00:09:57.280 "name": "BaseBdev2", 00:09:57.280 "uuid": "243ff104-7bde-4d30-bae4-de26fb7a3ef4", 00:09:57.280 "is_configured": true, 00:09:57.280 "data_offset": 2048, 00:09:57.280 "data_size": 63488 00:09:57.280 }, 00:09:57.280 { 00:09:57.280 "name": "BaseBdev3", 00:09:57.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.280 "is_configured": false, 00:09:57.280 "data_offset": 0, 00:09:57.280 "data_size": 0 00:09:57.280 }, 00:09:57.280 { 00:09:57.280 "name": "BaseBdev4", 00:09:57.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.280 "is_configured": false, 00:09:57.280 "data_offset": 0, 00:09:57.280 "data_size": 0 00:09:57.280 } 00:09:57.280 ] 00:09:57.280 }' 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.280 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.850 [2024-12-07 16:35:56.552099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.850 BaseBdev3 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.850 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.850 [ 00:09:57.850 { 00:09:57.850 "name": "BaseBdev3", 00:09:57.850 "aliases": [ 00:09:57.850 "84b5b74c-6218-40d4-88c4-2fd0771a0891" 00:09:57.850 ], 00:09:57.850 "product_name": "Malloc disk", 00:09:57.850 "block_size": 512, 00:09:57.850 "num_blocks": 65536, 00:09:57.850 "uuid": "84b5b74c-6218-40d4-88c4-2fd0771a0891", 00:09:57.850 "assigned_rate_limits": { 00:09:57.850 "rw_ios_per_sec": 0, 00:09:57.850 "rw_mbytes_per_sec": 0, 00:09:57.850 "r_mbytes_per_sec": 0, 00:09:57.850 "w_mbytes_per_sec": 0 00:09:57.850 }, 00:09:57.850 "claimed": true, 00:09:57.850 "claim_type": "exclusive_write", 00:09:57.850 "zoned": false, 00:09:57.850 "supported_io_types": { 
00:09:57.850 "read": true, 00:09:57.850 "write": true, 00:09:57.850 "unmap": true, 00:09:57.850 "flush": true, 00:09:57.850 "reset": true, 00:09:57.850 "nvme_admin": false, 00:09:57.850 "nvme_io": false, 00:09:57.850 "nvme_io_md": false, 00:09:57.850 "write_zeroes": true, 00:09:57.850 "zcopy": true, 00:09:57.850 "get_zone_info": false, 00:09:57.850 "zone_management": false, 00:09:57.850 "zone_append": false, 00:09:57.850 "compare": false, 00:09:57.850 "compare_and_write": false, 00:09:57.850 "abort": true, 00:09:57.850 "seek_hole": false, 00:09:57.850 "seek_data": false, 00:09:57.850 "copy": true, 00:09:57.850 "nvme_iov_md": false 00:09:57.850 }, 00:09:57.850 "memory_domains": [ 00:09:57.850 { 00:09:57.850 "dma_device_id": "system", 00:09:57.850 "dma_device_type": 1 00:09:57.850 }, 00:09:57.850 { 00:09:57.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.850 "dma_device_type": 2 00:09:57.850 } 00:09:57.851 ], 00:09:57.851 "driver_specific": {} 00:09:57.851 } 00:09:57.851 ] 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.851 "name": "Existed_Raid", 00:09:57.851 "uuid": "5a186cc6-8eac-43fd-9176-6ed142473605", 00:09:57.851 "strip_size_kb": 64, 00:09:57.851 "state": "configuring", 00:09:57.851 "raid_level": "raid0", 00:09:57.851 "superblock": true, 00:09:57.851 "num_base_bdevs": 4, 00:09:57.851 "num_base_bdevs_discovered": 3, 00:09:57.851 "num_base_bdevs_operational": 4, 00:09:57.851 "base_bdevs_list": [ 00:09:57.851 { 00:09:57.851 "name": "BaseBdev1", 00:09:57.851 "uuid": "3cb73487-f656-4536-b5b4-49b7fe5a2143", 00:09:57.851 "is_configured": true, 00:09:57.851 "data_offset": 2048, 00:09:57.851 "data_size": 63488 00:09:57.851 }, 00:09:57.851 { 00:09:57.851 "name": "BaseBdev2", 00:09:57.851 
"uuid": "243ff104-7bde-4d30-bae4-de26fb7a3ef4", 00:09:57.851 "is_configured": true, 00:09:57.851 "data_offset": 2048, 00:09:57.851 "data_size": 63488 00:09:57.851 }, 00:09:57.851 { 00:09:57.851 "name": "BaseBdev3", 00:09:57.851 "uuid": "84b5b74c-6218-40d4-88c4-2fd0771a0891", 00:09:57.851 "is_configured": true, 00:09:57.851 "data_offset": 2048, 00:09:57.851 "data_size": 63488 00:09:57.851 }, 00:09:57.851 { 00:09:57.851 "name": "BaseBdev4", 00:09:57.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.851 "is_configured": false, 00:09:57.851 "data_offset": 0, 00:09:57.851 "data_size": 0 00:09:57.851 } 00:09:57.851 ] 00:09:57.851 }' 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.851 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.422 [2024-12-07 16:35:57.060603] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:58.422 [2024-12-07 16:35:57.060950] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:58.422 BaseBdev4 00:09:58.422 [2024-12-07 16:35:57.061022] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:58.422 [2024-12-07 16:35:57.061372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:58.422 [2024-12-07 16:35:57.061542] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:58.422 [2024-12-07 16:35:57.061557] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:09:58.422 [2024-12-07 16:35:57.061677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.422 [ 00:09:58.422 { 00:09:58.422 "name": "BaseBdev4", 00:09:58.422 "aliases": [ 00:09:58.422 "889f563f-9187-439b-8524-496dfc4a4fc6" 00:09:58.422 ], 00:09:58.422 "product_name": "Malloc disk", 00:09:58.422 "block_size": 512, 00:09:58.422 
"num_blocks": 65536, 00:09:58.422 "uuid": "889f563f-9187-439b-8524-496dfc4a4fc6", 00:09:58.422 "assigned_rate_limits": { 00:09:58.422 "rw_ios_per_sec": 0, 00:09:58.422 "rw_mbytes_per_sec": 0, 00:09:58.422 "r_mbytes_per_sec": 0, 00:09:58.422 "w_mbytes_per_sec": 0 00:09:58.422 }, 00:09:58.422 "claimed": true, 00:09:58.422 "claim_type": "exclusive_write", 00:09:58.422 "zoned": false, 00:09:58.422 "supported_io_types": { 00:09:58.422 "read": true, 00:09:58.422 "write": true, 00:09:58.422 "unmap": true, 00:09:58.422 "flush": true, 00:09:58.422 "reset": true, 00:09:58.422 "nvme_admin": false, 00:09:58.422 "nvme_io": false, 00:09:58.422 "nvme_io_md": false, 00:09:58.422 "write_zeroes": true, 00:09:58.422 "zcopy": true, 00:09:58.422 "get_zone_info": false, 00:09:58.422 "zone_management": false, 00:09:58.422 "zone_append": false, 00:09:58.422 "compare": false, 00:09:58.422 "compare_and_write": false, 00:09:58.422 "abort": true, 00:09:58.422 "seek_hole": false, 00:09:58.422 "seek_data": false, 00:09:58.422 "copy": true, 00:09:58.422 "nvme_iov_md": false 00:09:58.422 }, 00:09:58.422 "memory_domains": [ 00:09:58.422 { 00:09:58.422 "dma_device_id": "system", 00:09:58.422 "dma_device_type": 1 00:09:58.422 }, 00:09:58.422 { 00:09:58.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.422 "dma_device_type": 2 00:09:58.422 } 00:09:58.422 ], 00:09:58.422 "driver_specific": {} 00:09:58.422 } 00:09:58.422 ] 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.422 "name": "Existed_Raid", 00:09:58.422 "uuid": "5a186cc6-8eac-43fd-9176-6ed142473605", 00:09:58.422 "strip_size_kb": 64, 00:09:58.422 "state": "online", 00:09:58.422 "raid_level": "raid0", 00:09:58.422 "superblock": true, 00:09:58.422 "num_base_bdevs": 4, 
00:09:58.422 "num_base_bdevs_discovered": 4, 00:09:58.422 "num_base_bdevs_operational": 4, 00:09:58.422 "base_bdevs_list": [ 00:09:58.422 { 00:09:58.422 "name": "BaseBdev1", 00:09:58.422 "uuid": "3cb73487-f656-4536-b5b4-49b7fe5a2143", 00:09:58.422 "is_configured": true, 00:09:58.422 "data_offset": 2048, 00:09:58.422 "data_size": 63488 00:09:58.422 }, 00:09:58.422 { 00:09:58.422 "name": "BaseBdev2", 00:09:58.422 "uuid": "243ff104-7bde-4d30-bae4-de26fb7a3ef4", 00:09:58.422 "is_configured": true, 00:09:58.422 "data_offset": 2048, 00:09:58.422 "data_size": 63488 00:09:58.422 }, 00:09:58.422 { 00:09:58.422 "name": "BaseBdev3", 00:09:58.422 "uuid": "84b5b74c-6218-40d4-88c4-2fd0771a0891", 00:09:58.422 "is_configured": true, 00:09:58.422 "data_offset": 2048, 00:09:58.422 "data_size": 63488 00:09:58.422 }, 00:09:58.422 { 00:09:58.422 "name": "BaseBdev4", 00:09:58.422 "uuid": "889f563f-9187-439b-8524-496dfc4a4fc6", 00:09:58.422 "is_configured": true, 00:09:58.422 "data_offset": 2048, 00:09:58.422 "data_size": 63488 00:09:58.422 } 00:09:58.422 ] 00:09:58.422 }' 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.422 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.683 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:58.683 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:58.683 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:58.683 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:58.683 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:58.683 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:58.683 
16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:58.683 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:58.683 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.683 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.683 [2024-12-07 16:35:57.568163] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.944 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.944 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:58.944 "name": "Existed_Raid", 00:09:58.944 "aliases": [ 00:09:58.944 "5a186cc6-8eac-43fd-9176-6ed142473605" 00:09:58.944 ], 00:09:58.944 "product_name": "Raid Volume", 00:09:58.944 "block_size": 512, 00:09:58.944 "num_blocks": 253952, 00:09:58.944 "uuid": "5a186cc6-8eac-43fd-9176-6ed142473605", 00:09:58.944 "assigned_rate_limits": { 00:09:58.944 "rw_ios_per_sec": 0, 00:09:58.944 "rw_mbytes_per_sec": 0, 00:09:58.944 "r_mbytes_per_sec": 0, 00:09:58.944 "w_mbytes_per_sec": 0 00:09:58.944 }, 00:09:58.944 "claimed": false, 00:09:58.944 "zoned": false, 00:09:58.944 "supported_io_types": { 00:09:58.944 "read": true, 00:09:58.944 "write": true, 00:09:58.944 "unmap": true, 00:09:58.944 "flush": true, 00:09:58.944 "reset": true, 00:09:58.944 "nvme_admin": false, 00:09:58.944 "nvme_io": false, 00:09:58.944 "nvme_io_md": false, 00:09:58.944 "write_zeroes": true, 00:09:58.944 "zcopy": false, 00:09:58.944 "get_zone_info": false, 00:09:58.944 "zone_management": false, 00:09:58.944 "zone_append": false, 00:09:58.944 "compare": false, 00:09:58.944 "compare_and_write": false, 00:09:58.944 "abort": false, 00:09:58.944 "seek_hole": false, 00:09:58.944 "seek_data": false, 00:09:58.944 "copy": false, 00:09:58.944 
"nvme_iov_md": false 00:09:58.944 }, 00:09:58.944 "memory_domains": [ 00:09:58.944 { 00:09:58.944 "dma_device_id": "system", 00:09:58.944 "dma_device_type": 1 00:09:58.944 }, 00:09:58.944 { 00:09:58.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.944 "dma_device_type": 2 00:09:58.944 }, 00:09:58.944 { 00:09:58.944 "dma_device_id": "system", 00:09:58.944 "dma_device_type": 1 00:09:58.944 }, 00:09:58.944 { 00:09:58.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.944 "dma_device_type": 2 00:09:58.944 }, 00:09:58.944 { 00:09:58.944 "dma_device_id": "system", 00:09:58.944 "dma_device_type": 1 00:09:58.944 }, 00:09:58.944 { 00:09:58.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.944 "dma_device_type": 2 00:09:58.944 }, 00:09:58.944 { 00:09:58.944 "dma_device_id": "system", 00:09:58.944 "dma_device_type": 1 00:09:58.944 }, 00:09:58.944 { 00:09:58.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.944 "dma_device_type": 2 00:09:58.944 } 00:09:58.944 ], 00:09:58.944 "driver_specific": { 00:09:58.944 "raid": { 00:09:58.944 "uuid": "5a186cc6-8eac-43fd-9176-6ed142473605", 00:09:58.944 "strip_size_kb": 64, 00:09:58.944 "state": "online", 00:09:58.944 "raid_level": "raid0", 00:09:58.944 "superblock": true, 00:09:58.944 "num_base_bdevs": 4, 00:09:58.944 "num_base_bdevs_discovered": 4, 00:09:58.944 "num_base_bdevs_operational": 4, 00:09:58.944 "base_bdevs_list": [ 00:09:58.944 { 00:09:58.944 "name": "BaseBdev1", 00:09:58.944 "uuid": "3cb73487-f656-4536-b5b4-49b7fe5a2143", 00:09:58.944 "is_configured": true, 00:09:58.944 "data_offset": 2048, 00:09:58.944 "data_size": 63488 00:09:58.944 }, 00:09:58.944 { 00:09:58.944 "name": "BaseBdev2", 00:09:58.944 "uuid": "243ff104-7bde-4d30-bae4-de26fb7a3ef4", 00:09:58.944 "is_configured": true, 00:09:58.944 "data_offset": 2048, 00:09:58.944 "data_size": 63488 00:09:58.944 }, 00:09:58.944 { 00:09:58.944 "name": "BaseBdev3", 00:09:58.944 "uuid": "84b5b74c-6218-40d4-88c4-2fd0771a0891", 00:09:58.944 "is_configured": true, 
00:09:58.944 "data_offset": 2048, 00:09:58.944 "data_size": 63488 00:09:58.944 }, 00:09:58.944 { 00:09:58.944 "name": "BaseBdev4", 00:09:58.944 "uuid": "889f563f-9187-439b-8524-496dfc4a4fc6", 00:09:58.945 "is_configured": true, 00:09:58.945 "data_offset": 2048, 00:09:58.945 "data_size": 63488 00:09:58.945 } 00:09:58.945 ] 00:09:58.945 } 00:09:58.945 } 00:09:58.945 }' 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:58.945 BaseBdev2 00:09:58.945 BaseBdev3 00:09:58.945 BaseBdev4' 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.945 16:35:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.945 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.205 [2024-12-07 16:35:57.903273] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.205 [2024-12-07 16:35:57.903381] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.205 [2024-12-07 16:35:57.903486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.205 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.206 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:59.206 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.206 "name": "Existed_Raid", 00:09:59.206 "uuid": "5a186cc6-8eac-43fd-9176-6ed142473605", 00:09:59.206 "strip_size_kb": 64, 00:09:59.206 "state": "offline", 00:09:59.206 "raid_level": "raid0", 00:09:59.206 "superblock": true, 00:09:59.206 "num_base_bdevs": 4, 00:09:59.206 "num_base_bdevs_discovered": 3, 00:09:59.206 "num_base_bdevs_operational": 3, 00:09:59.206 "base_bdevs_list": [ 00:09:59.206 { 00:09:59.206 "name": null, 00:09:59.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.206 "is_configured": false, 00:09:59.206 "data_offset": 0, 00:09:59.206 "data_size": 63488 00:09:59.206 }, 00:09:59.206 { 00:09:59.206 "name": "BaseBdev2", 00:09:59.206 "uuid": "243ff104-7bde-4d30-bae4-de26fb7a3ef4", 00:09:59.206 "is_configured": true, 00:09:59.206 "data_offset": 2048, 00:09:59.206 "data_size": 63488 00:09:59.206 }, 00:09:59.206 { 00:09:59.206 "name": "BaseBdev3", 00:09:59.206 "uuid": "84b5b74c-6218-40d4-88c4-2fd0771a0891", 00:09:59.206 "is_configured": true, 00:09:59.206 "data_offset": 2048, 00:09:59.206 "data_size": 63488 00:09:59.206 }, 00:09:59.206 { 00:09:59.206 "name": "BaseBdev4", 00:09:59.206 "uuid": "889f563f-9187-439b-8524-496dfc4a4fc6", 00:09:59.206 "is_configured": true, 00:09:59.206 "data_offset": 2048, 00:09:59.206 "data_size": 63488 00:09:59.206 } 00:09:59.206 ] 00:09:59.206 }' 00:09:59.206 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.206 16:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.466 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:59.466 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.466 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.466 
16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.466 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.466 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.466 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.727 [2024-12-07 16:35:58.379373] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.727 [2024-12-07 16:35:58.448089] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:59.727 16:35:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.727 [2024-12-07 16:35:58.528923] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:59.727 [2024-12-07 16:35:58.529069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.727 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.989 BaseBdev2 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.989 [ 00:09:59.989 { 00:09:59.989 "name": "BaseBdev2", 00:09:59.989 "aliases": [ 00:09:59.989 
"1fd5d451-5295-4e56-9765-7aa25514a949" 00:09:59.989 ], 00:09:59.989 "product_name": "Malloc disk", 00:09:59.989 "block_size": 512, 00:09:59.989 "num_blocks": 65536, 00:09:59.989 "uuid": "1fd5d451-5295-4e56-9765-7aa25514a949", 00:09:59.989 "assigned_rate_limits": { 00:09:59.989 "rw_ios_per_sec": 0, 00:09:59.989 "rw_mbytes_per_sec": 0, 00:09:59.989 "r_mbytes_per_sec": 0, 00:09:59.989 "w_mbytes_per_sec": 0 00:09:59.989 }, 00:09:59.989 "claimed": false, 00:09:59.989 "zoned": false, 00:09:59.989 "supported_io_types": { 00:09:59.989 "read": true, 00:09:59.989 "write": true, 00:09:59.989 "unmap": true, 00:09:59.989 "flush": true, 00:09:59.989 "reset": true, 00:09:59.989 "nvme_admin": false, 00:09:59.989 "nvme_io": false, 00:09:59.989 "nvme_io_md": false, 00:09:59.989 "write_zeroes": true, 00:09:59.989 "zcopy": true, 00:09:59.989 "get_zone_info": false, 00:09:59.989 "zone_management": false, 00:09:59.989 "zone_append": false, 00:09:59.989 "compare": false, 00:09:59.989 "compare_and_write": false, 00:09:59.989 "abort": true, 00:09:59.989 "seek_hole": false, 00:09:59.989 "seek_data": false, 00:09:59.989 "copy": true, 00:09:59.989 "nvme_iov_md": false 00:09:59.989 }, 00:09:59.989 "memory_domains": [ 00:09:59.989 { 00:09:59.989 "dma_device_id": "system", 00:09:59.989 "dma_device_type": 1 00:09:59.989 }, 00:09:59.989 { 00:09:59.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.989 "dma_device_type": 2 00:09:59.989 } 00:09:59.989 ], 00:09:59.989 "driver_specific": {} 00:09:59.989 } 00:09:59.989 ] 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.989 16:35:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.989 BaseBdev3 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.989 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.989 [ 00:09:59.989 { 
00:09:59.989 "name": "BaseBdev3", 00:09:59.989 "aliases": [ 00:09:59.989 "a86452a2-9d88-483c-a636-8f0d886cce03" 00:09:59.989 ], 00:09:59.989 "product_name": "Malloc disk", 00:09:59.989 "block_size": 512, 00:09:59.989 "num_blocks": 65536, 00:09:59.990 "uuid": "a86452a2-9d88-483c-a636-8f0d886cce03", 00:09:59.990 "assigned_rate_limits": { 00:09:59.990 "rw_ios_per_sec": 0, 00:09:59.990 "rw_mbytes_per_sec": 0, 00:09:59.990 "r_mbytes_per_sec": 0, 00:09:59.990 "w_mbytes_per_sec": 0 00:09:59.990 }, 00:09:59.990 "claimed": false, 00:09:59.990 "zoned": false, 00:09:59.990 "supported_io_types": { 00:09:59.990 "read": true, 00:09:59.990 "write": true, 00:09:59.990 "unmap": true, 00:09:59.990 "flush": true, 00:09:59.990 "reset": true, 00:09:59.990 "nvme_admin": false, 00:09:59.990 "nvme_io": false, 00:09:59.990 "nvme_io_md": false, 00:09:59.990 "write_zeroes": true, 00:09:59.990 "zcopy": true, 00:09:59.990 "get_zone_info": false, 00:09:59.990 "zone_management": false, 00:09:59.990 "zone_append": false, 00:09:59.990 "compare": false, 00:09:59.990 "compare_and_write": false, 00:09:59.990 "abort": true, 00:09:59.990 "seek_hole": false, 00:09:59.990 "seek_data": false, 00:09:59.990 "copy": true, 00:09:59.990 "nvme_iov_md": false 00:09:59.990 }, 00:09:59.990 "memory_domains": [ 00:09:59.990 { 00:09:59.990 "dma_device_id": "system", 00:09:59.990 "dma_device_type": 1 00:09:59.990 }, 00:09:59.990 { 00:09:59.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.990 "dma_device_type": 2 00:09:59.990 } 00:09:59.990 ], 00:09:59.990 "driver_specific": {} 00:09:59.990 } 00:09:59.990 ] 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.990 BaseBdev4 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:59.990 [ 00:09:59.990 { 00:09:59.990 "name": "BaseBdev4", 00:09:59.990 "aliases": [ 00:09:59.990 "34b1071f-be6c-45dc-81ae-cdab7becec70" 00:09:59.990 ], 00:09:59.990 "product_name": "Malloc disk", 00:09:59.990 "block_size": 512, 00:09:59.990 "num_blocks": 65536, 00:09:59.990 "uuid": "34b1071f-be6c-45dc-81ae-cdab7becec70", 00:09:59.990 "assigned_rate_limits": { 00:09:59.990 "rw_ios_per_sec": 0, 00:09:59.990 "rw_mbytes_per_sec": 0, 00:09:59.990 "r_mbytes_per_sec": 0, 00:09:59.990 "w_mbytes_per_sec": 0 00:09:59.990 }, 00:09:59.990 "claimed": false, 00:09:59.990 "zoned": false, 00:09:59.990 "supported_io_types": { 00:09:59.990 "read": true, 00:09:59.990 "write": true, 00:09:59.990 "unmap": true, 00:09:59.990 "flush": true, 00:09:59.990 "reset": true, 00:09:59.990 "nvme_admin": false, 00:09:59.990 "nvme_io": false, 00:09:59.990 "nvme_io_md": false, 00:09:59.990 "write_zeroes": true, 00:09:59.990 "zcopy": true, 00:09:59.990 "get_zone_info": false, 00:09:59.990 "zone_management": false, 00:09:59.990 "zone_append": false, 00:09:59.990 "compare": false, 00:09:59.990 "compare_and_write": false, 00:09:59.990 "abort": true, 00:09:59.990 "seek_hole": false, 00:09:59.990 "seek_data": false, 00:09:59.990 "copy": true, 00:09:59.990 "nvme_iov_md": false 00:09:59.990 }, 00:09:59.990 "memory_domains": [ 00:09:59.990 { 00:09:59.990 "dma_device_id": "system", 00:09:59.990 "dma_device_type": 1 00:09:59.990 }, 00:09:59.990 { 00:09:59.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.990 "dma_device_type": 2 00:09:59.990 } 00:09:59.990 ], 00:09:59.990 "driver_specific": {} 00:09:59.990 } 00:09:59.990 ] 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:59.990 16:35:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.990 [2024-12-07 16:35:58.777570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.990 [2024-12-07 16:35:58.777679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.990 [2024-12-07 16:35:58.777782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.990 [2024-12-07 16:35:58.780058] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.990 [2024-12-07 16:35:58.780159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.990 "name": "Existed_Raid", 00:09:59.990 "uuid": "5bde1240-097b-422e-9a7f-4965b33d6807", 00:09:59.990 "strip_size_kb": 64, 00:09:59.990 "state": "configuring", 00:09:59.990 "raid_level": "raid0", 00:09:59.990 "superblock": true, 00:09:59.990 "num_base_bdevs": 4, 00:09:59.990 "num_base_bdevs_discovered": 3, 00:09:59.990 "num_base_bdevs_operational": 4, 00:09:59.990 "base_bdevs_list": [ 00:09:59.990 { 00:09:59.990 "name": "BaseBdev1", 00:09:59.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.990 "is_configured": false, 00:09:59.990 "data_offset": 0, 00:09:59.990 "data_size": 0 00:09:59.990 }, 00:09:59.990 { 00:09:59.990 "name": "BaseBdev2", 00:09:59.990 "uuid": "1fd5d451-5295-4e56-9765-7aa25514a949", 00:09:59.990 "is_configured": true, 00:09:59.990 "data_offset": 2048, 00:09:59.990 "data_size": 63488 
00:09:59.990 }, 00:09:59.990 { 00:09:59.990 "name": "BaseBdev3", 00:09:59.990 "uuid": "a86452a2-9d88-483c-a636-8f0d886cce03", 00:09:59.990 "is_configured": true, 00:09:59.990 "data_offset": 2048, 00:09:59.990 "data_size": 63488 00:09:59.990 }, 00:09:59.990 { 00:09:59.990 "name": "BaseBdev4", 00:09:59.990 "uuid": "34b1071f-be6c-45dc-81ae-cdab7becec70", 00:09:59.990 "is_configured": true, 00:09:59.990 "data_offset": 2048, 00:09:59.990 "data_size": 63488 00:09:59.990 } 00:09:59.990 ] 00:09:59.990 }' 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.990 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.561 [2024-12-07 16:35:59.256689] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.561 "name": "Existed_Raid", 00:10:00.561 "uuid": "5bde1240-097b-422e-9a7f-4965b33d6807", 00:10:00.561 "strip_size_kb": 64, 00:10:00.561 "state": "configuring", 00:10:00.561 "raid_level": "raid0", 00:10:00.561 "superblock": true, 00:10:00.561 "num_base_bdevs": 4, 00:10:00.561 "num_base_bdevs_discovered": 2, 00:10:00.561 "num_base_bdevs_operational": 4, 00:10:00.561 "base_bdevs_list": [ 00:10:00.561 { 00:10:00.561 "name": "BaseBdev1", 00:10:00.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.561 "is_configured": false, 00:10:00.561 "data_offset": 0, 00:10:00.561 "data_size": 0 00:10:00.561 }, 00:10:00.561 { 00:10:00.561 "name": null, 00:10:00.561 "uuid": "1fd5d451-5295-4e56-9765-7aa25514a949", 00:10:00.561 "is_configured": false, 00:10:00.561 "data_offset": 0, 00:10:00.561 "data_size": 63488 
00:10:00.561 }, 00:10:00.561 { 00:10:00.561 "name": "BaseBdev3", 00:10:00.561 "uuid": "a86452a2-9d88-483c-a636-8f0d886cce03", 00:10:00.561 "is_configured": true, 00:10:00.561 "data_offset": 2048, 00:10:00.561 "data_size": 63488 00:10:00.561 }, 00:10:00.561 { 00:10:00.561 "name": "BaseBdev4", 00:10:00.561 "uuid": "34b1071f-be6c-45dc-81ae-cdab7becec70", 00:10:00.561 "is_configured": true, 00:10:00.561 "data_offset": 2048, 00:10:00.561 "data_size": 63488 00:10:00.561 } 00:10:00.561 ] 00:10:00.561 }' 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.561 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.821 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.821 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:00.821 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.821 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.081 BaseBdev1 00:10:01.081 [2024-12-07 16:35:59.777038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.081 [ 00:10:01.081 { 00:10:01.081 "name": "BaseBdev1", 00:10:01.081 "aliases": [ 00:10:01.081 "f38a5049-a573-47b4-b7c9-8a97d6c87e24" 00:10:01.081 ], 00:10:01.081 "product_name": "Malloc disk", 00:10:01.081 "block_size": 512, 00:10:01.081 "num_blocks": 65536, 00:10:01.081 "uuid": "f38a5049-a573-47b4-b7c9-8a97d6c87e24", 00:10:01.081 "assigned_rate_limits": { 00:10:01.081 "rw_ios_per_sec": 0, 00:10:01.081 "rw_mbytes_per_sec": 0, 
00:10:01.081 "r_mbytes_per_sec": 0, 00:10:01.081 "w_mbytes_per_sec": 0 00:10:01.081 }, 00:10:01.081 "claimed": true, 00:10:01.081 "claim_type": "exclusive_write", 00:10:01.081 "zoned": false, 00:10:01.081 "supported_io_types": { 00:10:01.081 "read": true, 00:10:01.081 "write": true, 00:10:01.081 "unmap": true, 00:10:01.081 "flush": true, 00:10:01.081 "reset": true, 00:10:01.081 "nvme_admin": false, 00:10:01.081 "nvme_io": false, 00:10:01.081 "nvme_io_md": false, 00:10:01.081 "write_zeroes": true, 00:10:01.081 "zcopy": true, 00:10:01.081 "get_zone_info": false, 00:10:01.081 "zone_management": false, 00:10:01.081 "zone_append": false, 00:10:01.081 "compare": false, 00:10:01.081 "compare_and_write": false, 00:10:01.081 "abort": true, 00:10:01.081 "seek_hole": false, 00:10:01.081 "seek_data": false, 00:10:01.081 "copy": true, 00:10:01.081 "nvme_iov_md": false 00:10:01.081 }, 00:10:01.081 "memory_domains": [ 00:10:01.081 { 00:10:01.081 "dma_device_id": "system", 00:10:01.081 "dma_device_type": 1 00:10:01.081 }, 00:10:01.081 { 00:10:01.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.081 "dma_device_type": 2 00:10:01.081 } 00:10:01.081 ], 00:10:01.081 "driver_specific": {} 00:10:01.081 } 00:10:01.081 ] 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.081 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.082 16:35:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.082 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.082 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.082 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.082 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.082 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.082 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.082 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.082 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.082 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.082 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.082 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.082 "name": "Existed_Raid", 00:10:01.082 "uuid": "5bde1240-097b-422e-9a7f-4965b33d6807", 00:10:01.082 "strip_size_kb": 64, 00:10:01.082 "state": "configuring", 00:10:01.082 "raid_level": "raid0", 00:10:01.082 "superblock": true, 00:10:01.082 "num_base_bdevs": 4, 00:10:01.082 "num_base_bdevs_discovered": 3, 00:10:01.082 "num_base_bdevs_operational": 4, 00:10:01.082 "base_bdevs_list": [ 00:10:01.082 { 00:10:01.082 "name": "BaseBdev1", 00:10:01.082 "uuid": "f38a5049-a573-47b4-b7c9-8a97d6c87e24", 00:10:01.082 "is_configured": true, 00:10:01.082 "data_offset": 2048, 00:10:01.082 "data_size": 63488 00:10:01.082 }, 00:10:01.082 { 
00:10:01.082 "name": null, 00:10:01.082 "uuid": "1fd5d451-5295-4e56-9765-7aa25514a949", 00:10:01.082 "is_configured": false, 00:10:01.082 "data_offset": 0, 00:10:01.082 "data_size": 63488 00:10:01.082 }, 00:10:01.082 { 00:10:01.082 "name": "BaseBdev3", 00:10:01.082 "uuid": "a86452a2-9d88-483c-a636-8f0d886cce03", 00:10:01.082 "is_configured": true, 00:10:01.082 "data_offset": 2048, 00:10:01.082 "data_size": 63488 00:10:01.082 }, 00:10:01.082 { 00:10:01.082 "name": "BaseBdev4", 00:10:01.082 "uuid": "34b1071f-be6c-45dc-81ae-cdab7becec70", 00:10:01.082 "is_configured": true, 00:10:01.082 "data_offset": 2048, 00:10:01.082 "data_size": 63488 00:10:01.082 } 00:10:01.082 ] 00:10:01.082 }' 00:10:01.082 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.082 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.652 [2024-12-07 16:36:00.300165] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.652 16:36:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.652 "name": "Existed_Raid", 00:10:01.652 "uuid": "5bde1240-097b-422e-9a7f-4965b33d6807", 00:10:01.652 "strip_size_kb": 64, 00:10:01.652 "state": "configuring", 00:10:01.652 "raid_level": "raid0", 00:10:01.652 "superblock": true, 00:10:01.652 "num_base_bdevs": 4, 00:10:01.652 "num_base_bdevs_discovered": 2, 00:10:01.652 "num_base_bdevs_operational": 4, 00:10:01.652 "base_bdevs_list": [ 00:10:01.652 { 00:10:01.652 "name": "BaseBdev1", 00:10:01.652 "uuid": "f38a5049-a573-47b4-b7c9-8a97d6c87e24", 00:10:01.652 "is_configured": true, 00:10:01.652 "data_offset": 2048, 00:10:01.652 "data_size": 63488 00:10:01.652 }, 00:10:01.652 { 00:10:01.652 "name": null, 00:10:01.652 "uuid": "1fd5d451-5295-4e56-9765-7aa25514a949", 00:10:01.652 "is_configured": false, 00:10:01.652 "data_offset": 0, 00:10:01.652 "data_size": 63488 00:10:01.652 }, 00:10:01.652 { 00:10:01.652 "name": null, 00:10:01.652 "uuid": "a86452a2-9d88-483c-a636-8f0d886cce03", 00:10:01.652 "is_configured": false, 00:10:01.652 "data_offset": 0, 00:10:01.652 "data_size": 63488 00:10:01.652 }, 00:10:01.652 { 00:10:01.652 "name": "BaseBdev4", 00:10:01.652 "uuid": "34b1071f-be6c-45dc-81ae-cdab7becec70", 00:10:01.652 "is_configured": true, 00:10:01.652 "data_offset": 2048, 00:10:01.652 "data_size": 63488 00:10:01.652 } 00:10:01.652 ] 00:10:01.652 }' 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.652 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.913 
16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.913 [2024-12-07 16:36:00.791392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.913 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.172 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.172 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.172 "name": "Existed_Raid", 00:10:02.172 "uuid": "5bde1240-097b-422e-9a7f-4965b33d6807", 00:10:02.172 "strip_size_kb": 64, 00:10:02.172 "state": "configuring", 00:10:02.172 "raid_level": "raid0", 00:10:02.172 "superblock": true, 00:10:02.172 "num_base_bdevs": 4, 00:10:02.172 "num_base_bdevs_discovered": 3, 00:10:02.172 "num_base_bdevs_operational": 4, 00:10:02.172 "base_bdevs_list": [ 00:10:02.172 { 00:10:02.172 "name": "BaseBdev1", 00:10:02.172 "uuid": "f38a5049-a573-47b4-b7c9-8a97d6c87e24", 00:10:02.172 "is_configured": true, 00:10:02.173 "data_offset": 2048, 00:10:02.173 "data_size": 63488 00:10:02.173 }, 00:10:02.173 { 00:10:02.173 "name": null, 00:10:02.173 "uuid": "1fd5d451-5295-4e56-9765-7aa25514a949", 00:10:02.173 "is_configured": false, 00:10:02.173 "data_offset": 0, 00:10:02.173 "data_size": 63488 00:10:02.173 }, 00:10:02.173 { 00:10:02.173 "name": "BaseBdev3", 00:10:02.173 "uuid": "a86452a2-9d88-483c-a636-8f0d886cce03", 00:10:02.173 "is_configured": true, 00:10:02.173 "data_offset": 2048, 00:10:02.173 "data_size": 63488 00:10:02.173 }, 00:10:02.173 { 00:10:02.173 "name": "BaseBdev4", 00:10:02.173 "uuid": 
"34b1071f-be6c-45dc-81ae-cdab7becec70", 00:10:02.173 "is_configured": true, 00:10:02.173 "data_offset": 2048, 00:10:02.173 "data_size": 63488 00:10:02.173 } 00:10:02.173 ] 00:10:02.173 }' 00:10:02.173 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.173 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.431 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.431 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.431 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.431 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.431 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.431 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:02.431 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.431 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.431 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.432 [2024-12-07 16:36:01.294565] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.432 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.690 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.690 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.690 "name": "Existed_Raid", 00:10:02.690 "uuid": "5bde1240-097b-422e-9a7f-4965b33d6807", 00:10:02.690 "strip_size_kb": 64, 00:10:02.690 "state": "configuring", 00:10:02.690 "raid_level": "raid0", 00:10:02.690 "superblock": true, 00:10:02.690 "num_base_bdevs": 4, 00:10:02.690 "num_base_bdevs_discovered": 2, 00:10:02.690 "num_base_bdevs_operational": 4, 00:10:02.690 "base_bdevs_list": [ 00:10:02.690 { 00:10:02.690 "name": null, 00:10:02.690 
"uuid": "f38a5049-a573-47b4-b7c9-8a97d6c87e24", 00:10:02.690 "is_configured": false, 00:10:02.690 "data_offset": 0, 00:10:02.690 "data_size": 63488 00:10:02.690 }, 00:10:02.690 { 00:10:02.690 "name": null, 00:10:02.690 "uuid": "1fd5d451-5295-4e56-9765-7aa25514a949", 00:10:02.690 "is_configured": false, 00:10:02.690 "data_offset": 0, 00:10:02.690 "data_size": 63488 00:10:02.690 }, 00:10:02.690 { 00:10:02.690 "name": "BaseBdev3", 00:10:02.690 "uuid": "a86452a2-9d88-483c-a636-8f0d886cce03", 00:10:02.690 "is_configured": true, 00:10:02.690 "data_offset": 2048, 00:10:02.690 "data_size": 63488 00:10:02.690 }, 00:10:02.690 { 00:10:02.690 "name": "BaseBdev4", 00:10:02.690 "uuid": "34b1071f-be6c-45dc-81ae-cdab7becec70", 00:10:02.690 "is_configured": true, 00:10:02.690 "data_offset": 2048, 00:10:02.690 "data_size": 63488 00:10:02.690 } 00:10:02.690 ] 00:10:02.690 }' 00:10:02.690 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.690 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.950 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:02.950 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.951 [2024-12-07 16:36:01.813727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.951 16:36:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.951 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.210 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.210 "name": "Existed_Raid", 00:10:03.210 "uuid": "5bde1240-097b-422e-9a7f-4965b33d6807", 00:10:03.210 "strip_size_kb": 64, 00:10:03.210 "state": "configuring", 00:10:03.210 "raid_level": "raid0", 00:10:03.210 "superblock": true, 00:10:03.210 "num_base_bdevs": 4, 00:10:03.210 "num_base_bdevs_discovered": 3, 00:10:03.210 "num_base_bdevs_operational": 4, 00:10:03.210 "base_bdevs_list": [ 00:10:03.210 { 00:10:03.210 "name": null, 00:10:03.210 "uuid": "f38a5049-a573-47b4-b7c9-8a97d6c87e24", 00:10:03.210 "is_configured": false, 00:10:03.210 "data_offset": 0, 00:10:03.210 "data_size": 63488 00:10:03.210 }, 00:10:03.210 { 00:10:03.210 "name": "BaseBdev2", 00:10:03.210 "uuid": "1fd5d451-5295-4e56-9765-7aa25514a949", 00:10:03.210 "is_configured": true, 00:10:03.210 "data_offset": 2048, 00:10:03.210 "data_size": 63488 00:10:03.210 }, 00:10:03.210 { 00:10:03.210 "name": "BaseBdev3", 00:10:03.210 "uuid": "a86452a2-9d88-483c-a636-8f0d886cce03", 00:10:03.210 "is_configured": true, 00:10:03.210 "data_offset": 2048, 00:10:03.210 "data_size": 63488 00:10:03.210 }, 00:10:03.210 { 00:10:03.210 "name": "BaseBdev4", 00:10:03.210 "uuid": "34b1071f-be6c-45dc-81ae-cdab7becec70", 00:10:03.210 "is_configured": true, 00:10:03.210 "data_offset": 2048, 00:10:03.210 "data_size": 63488 00:10:03.210 } 00:10:03.210 ] 00:10:03.210 }' 00:10:03.210 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.210 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.469 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.469 16:36:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.469 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.469 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.469 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.469 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:03.469 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.469 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:03.469 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.469 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.469 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.469 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f38a5049-a573-47b4-b7c9-8a97d6c87e24 00:10:03.469 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.469 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.731 [2024-12-07 16:36:02.381899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:03.731 [2024-12-07 16:36:02.382218] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:03.731 [2024-12-07 16:36:02.382271] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:03.731 [2024-12-07 16:36:02.382630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:03.731 [2024-12-07 16:36:02.382827] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:03.731 [2024-12-07 16:36:02.382876] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:03.731 NewBaseBdev 00:10:03.731 [2024-12-07 16:36:02.383042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.731 16:36:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.731 [ 00:10:03.731 { 00:10:03.731 "name": "NewBaseBdev", 00:10:03.731 "aliases": [ 00:10:03.731 "f38a5049-a573-47b4-b7c9-8a97d6c87e24" 00:10:03.731 ], 00:10:03.731 "product_name": "Malloc disk", 00:10:03.731 "block_size": 512, 00:10:03.731 "num_blocks": 65536, 00:10:03.731 "uuid": "f38a5049-a573-47b4-b7c9-8a97d6c87e24", 00:10:03.731 "assigned_rate_limits": { 00:10:03.731 "rw_ios_per_sec": 0, 00:10:03.731 "rw_mbytes_per_sec": 0, 00:10:03.731 "r_mbytes_per_sec": 0, 00:10:03.731 "w_mbytes_per_sec": 0 00:10:03.731 }, 00:10:03.731 "claimed": true, 00:10:03.731 "claim_type": "exclusive_write", 00:10:03.731 "zoned": false, 00:10:03.731 "supported_io_types": { 00:10:03.731 "read": true, 00:10:03.731 "write": true, 00:10:03.731 "unmap": true, 00:10:03.731 "flush": true, 00:10:03.731 "reset": true, 00:10:03.731 "nvme_admin": false, 00:10:03.731 "nvme_io": false, 00:10:03.731 "nvme_io_md": false, 00:10:03.731 "write_zeroes": true, 00:10:03.731 "zcopy": true, 00:10:03.731 "get_zone_info": false, 00:10:03.731 "zone_management": false, 00:10:03.731 "zone_append": false, 00:10:03.731 "compare": false, 00:10:03.731 "compare_and_write": false, 00:10:03.731 "abort": true, 00:10:03.731 "seek_hole": false, 00:10:03.731 "seek_data": false, 00:10:03.731 "copy": true, 00:10:03.731 "nvme_iov_md": false 00:10:03.731 }, 00:10:03.731 "memory_domains": [ 00:10:03.731 { 00:10:03.731 "dma_device_id": "system", 00:10:03.731 "dma_device_type": 1 00:10:03.731 }, 00:10:03.731 { 00:10:03.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.731 "dma_device_type": 2 00:10:03.731 } 00:10:03.731 ], 00:10:03.731 "driver_specific": {} 00:10:03.731 } 00:10:03.731 ] 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:03.731 16:36:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.731 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.732 "name": "Existed_Raid", 00:10:03.732 "uuid": "5bde1240-097b-422e-9a7f-4965b33d6807", 00:10:03.732 "strip_size_kb": 64, 00:10:03.732 
"state": "online", 00:10:03.732 "raid_level": "raid0", 00:10:03.732 "superblock": true, 00:10:03.732 "num_base_bdevs": 4, 00:10:03.732 "num_base_bdevs_discovered": 4, 00:10:03.732 "num_base_bdevs_operational": 4, 00:10:03.732 "base_bdevs_list": [ 00:10:03.732 { 00:10:03.732 "name": "NewBaseBdev", 00:10:03.732 "uuid": "f38a5049-a573-47b4-b7c9-8a97d6c87e24", 00:10:03.732 "is_configured": true, 00:10:03.732 "data_offset": 2048, 00:10:03.732 "data_size": 63488 00:10:03.732 }, 00:10:03.732 { 00:10:03.732 "name": "BaseBdev2", 00:10:03.732 "uuid": "1fd5d451-5295-4e56-9765-7aa25514a949", 00:10:03.732 "is_configured": true, 00:10:03.732 "data_offset": 2048, 00:10:03.732 "data_size": 63488 00:10:03.732 }, 00:10:03.732 { 00:10:03.732 "name": "BaseBdev3", 00:10:03.732 "uuid": "a86452a2-9d88-483c-a636-8f0d886cce03", 00:10:03.732 "is_configured": true, 00:10:03.732 "data_offset": 2048, 00:10:03.732 "data_size": 63488 00:10:03.732 }, 00:10:03.732 { 00:10:03.732 "name": "BaseBdev4", 00:10:03.732 "uuid": "34b1071f-be6c-45dc-81ae-cdab7becec70", 00:10:03.732 "is_configured": true, 00:10:03.732 "data_offset": 2048, 00:10:03.732 "data_size": 63488 00:10:03.732 } 00:10:03.732 ] 00:10:03.732 }' 00:10:03.732 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.732 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.994 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:03.994 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:03.994 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:03.994 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.994 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.994 
16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.994 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.994 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.273 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.273 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.273 [2024-12-07 16:36:02.897467] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.273 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.273 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.273 "name": "Existed_Raid", 00:10:04.273 "aliases": [ 00:10:04.273 "5bde1240-097b-422e-9a7f-4965b33d6807" 00:10:04.273 ], 00:10:04.273 "product_name": "Raid Volume", 00:10:04.273 "block_size": 512, 00:10:04.273 "num_blocks": 253952, 00:10:04.273 "uuid": "5bde1240-097b-422e-9a7f-4965b33d6807", 00:10:04.273 "assigned_rate_limits": { 00:10:04.273 "rw_ios_per_sec": 0, 00:10:04.273 "rw_mbytes_per_sec": 0, 00:10:04.273 "r_mbytes_per_sec": 0, 00:10:04.273 "w_mbytes_per_sec": 0 00:10:04.273 }, 00:10:04.273 "claimed": false, 00:10:04.273 "zoned": false, 00:10:04.273 "supported_io_types": { 00:10:04.273 "read": true, 00:10:04.273 "write": true, 00:10:04.273 "unmap": true, 00:10:04.273 "flush": true, 00:10:04.273 "reset": true, 00:10:04.273 "nvme_admin": false, 00:10:04.273 "nvme_io": false, 00:10:04.273 "nvme_io_md": false, 00:10:04.273 "write_zeroes": true, 00:10:04.273 "zcopy": false, 00:10:04.273 "get_zone_info": false, 00:10:04.273 "zone_management": false, 00:10:04.273 "zone_append": false, 00:10:04.273 "compare": false, 00:10:04.273 "compare_and_write": false, 00:10:04.273 "abort": 
false, 00:10:04.273 "seek_hole": false, 00:10:04.273 "seek_data": false, 00:10:04.273 "copy": false, 00:10:04.273 "nvme_iov_md": false 00:10:04.273 }, 00:10:04.273 "memory_domains": [ 00:10:04.273 { 00:10:04.273 "dma_device_id": "system", 00:10:04.273 "dma_device_type": 1 00:10:04.273 }, 00:10:04.273 { 00:10:04.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.273 "dma_device_type": 2 00:10:04.273 }, 00:10:04.273 { 00:10:04.273 "dma_device_id": "system", 00:10:04.273 "dma_device_type": 1 00:10:04.273 }, 00:10:04.273 { 00:10:04.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.273 "dma_device_type": 2 00:10:04.273 }, 00:10:04.273 { 00:10:04.273 "dma_device_id": "system", 00:10:04.273 "dma_device_type": 1 00:10:04.273 }, 00:10:04.273 { 00:10:04.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.273 "dma_device_type": 2 00:10:04.273 }, 00:10:04.273 { 00:10:04.273 "dma_device_id": "system", 00:10:04.273 "dma_device_type": 1 00:10:04.273 }, 00:10:04.273 { 00:10:04.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.273 "dma_device_type": 2 00:10:04.273 } 00:10:04.273 ], 00:10:04.273 "driver_specific": { 00:10:04.273 "raid": { 00:10:04.273 "uuid": "5bde1240-097b-422e-9a7f-4965b33d6807", 00:10:04.273 "strip_size_kb": 64, 00:10:04.273 "state": "online", 00:10:04.273 "raid_level": "raid0", 00:10:04.273 "superblock": true, 00:10:04.273 "num_base_bdevs": 4, 00:10:04.273 "num_base_bdevs_discovered": 4, 00:10:04.273 "num_base_bdevs_operational": 4, 00:10:04.273 "base_bdevs_list": [ 00:10:04.273 { 00:10:04.273 "name": "NewBaseBdev", 00:10:04.273 "uuid": "f38a5049-a573-47b4-b7c9-8a97d6c87e24", 00:10:04.273 "is_configured": true, 00:10:04.273 "data_offset": 2048, 00:10:04.273 "data_size": 63488 00:10:04.273 }, 00:10:04.273 { 00:10:04.273 "name": "BaseBdev2", 00:10:04.274 "uuid": "1fd5d451-5295-4e56-9765-7aa25514a949", 00:10:04.274 "is_configured": true, 00:10:04.274 "data_offset": 2048, 00:10:04.274 "data_size": 63488 00:10:04.274 }, 00:10:04.274 { 00:10:04.274 
"name": "BaseBdev3", 00:10:04.274 "uuid": "a86452a2-9d88-483c-a636-8f0d886cce03", 00:10:04.274 "is_configured": true, 00:10:04.274 "data_offset": 2048, 00:10:04.274 "data_size": 63488 00:10:04.274 }, 00:10:04.274 { 00:10:04.274 "name": "BaseBdev4", 00:10:04.274 "uuid": "34b1071f-be6c-45dc-81ae-cdab7becec70", 00:10:04.274 "is_configured": true, 00:10:04.274 "data_offset": 2048, 00:10:04.274 "data_size": 63488 00:10:04.274 } 00:10:04.274 ] 00:10:04.274 } 00:10:04.274 } 00:10:04.274 }' 00:10:04.274 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.274 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:04.274 BaseBdev2 00:10:04.274 BaseBdev3 00:10:04.274 BaseBdev4' 00:10:04.274 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.274 16:36:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.274 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.533 [2024-12-07 16:36:03.240433] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.533 [2024-12-07 16:36:03.240468] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.533 [2024-12-07 16:36:03.240554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.533 [2024-12-07 16:36:03.240634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.533 [2024-12-07 16:36:03.240644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81301 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81301 ']' 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81301 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81301 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81301' 00:10:04.533 killing process with pid 81301 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81301 00:10:04.533 [2024-12-07 16:36:03.290168] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:04.533 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81301 00:10:04.533 [2024-12-07 16:36:03.366987] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:05.104 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:05.104 00:10:05.104 real 0m9.987s 00:10:05.104 user 0m16.685s 00:10:05.104 sys 0m2.198s 00:10:05.104 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.104 16:36:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.104 ************************************ 00:10:05.104 END TEST raid_state_function_test_sb 00:10:05.104 ************************************ 00:10:05.104 16:36:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:05.104 16:36:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:05.104 16:36:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.104 16:36:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:05.104 ************************************ 00:10:05.104 START TEST raid_superblock_test 00:10:05.104 ************************************ 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81955 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81955 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81955 ']' 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.104 16:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.104 [2024-12-07 16:36:03.881168] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:05.105 [2024-12-07 16:36:03.881387] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81955 ] 00:10:05.364 [2024-12-07 16:36:04.021586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.364 [2024-12-07 16:36:04.104203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.364 [2024-12-07 16:36:04.184310] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.364 [2024-12-07 16:36:04.184446] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.931 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.931 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:05.931 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:05.931 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:05.931 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:05.931 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:05.931 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:05.931 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:05.932 
16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.932 malloc1 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.932 [2024-12-07 16:36:04.801483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:05.932 [2024-12-07 16:36:04.801611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.932 [2024-12-07 16:36:04.801655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:05.932 [2024-12-07 16:36:04.801695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.932 [2024-12-07 16:36:04.804270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.932 [2024-12-07 16:36:04.804366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:05.932 pt1 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.932 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.192 malloc2 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.192 [2024-12-07 16:36:04.848808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:06.192 [2024-12-07 16:36:04.848939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.192 [2024-12-07 16:36:04.848986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:06.192 [2024-12-07 16:36:04.849030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.192 [2024-12-07 16:36:04.851920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.192 [2024-12-07 16:36:04.851991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:06.192 
pt2 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.192 malloc3 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.192 [2024-12-07 16:36:04.888125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:06.192 [2024-12-07 16:36:04.888230] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.192 [2024-12-07 16:36:04.888268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:06.192 [2024-12-07 16:36:04.888302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.192 [2024-12-07 16:36:04.890650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.192 [2024-12-07 16:36:04.890722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:06.192 pt3 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.192 malloc4 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.192 [2024-12-07 16:36:04.927780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:06.192 [2024-12-07 16:36:04.927881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.192 [2024-12-07 16:36:04.927905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:06.192 [2024-12-07 16:36:04.927920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.192 [2024-12-07 16:36:04.930405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.192 [2024-12-07 16:36:04.930439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:06.192 pt4 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.192 [2024-12-07 16:36:04.939856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:06.192 [2024-12-07 
16:36:04.942049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:06.192 [2024-12-07 16:36:04.942110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:06.192 [2024-12-07 16:36:04.942170] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:06.192 [2024-12-07 16:36:04.942327] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:06.192 [2024-12-07 16:36:04.942351] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:06.192 [2024-12-07 16:36:04.942650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:06.192 [2024-12-07 16:36:04.942811] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:06.192 [2024-12-07 16:36:04.942822] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:06.192 [2024-12-07 16:36:04.943005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.192 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:06.193 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.193 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.193 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.193 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.193 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.193 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.193 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.193 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.193 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.193 "name": "raid_bdev1", 00:10:06.193 "uuid": "385e17b9-720e-4635-bfeb-d58d1d661e40", 00:10:06.193 "strip_size_kb": 64, 00:10:06.193 "state": "online", 00:10:06.193 "raid_level": "raid0", 00:10:06.193 "superblock": true, 00:10:06.193 "num_base_bdevs": 4, 00:10:06.193 "num_base_bdevs_discovered": 4, 00:10:06.193 "num_base_bdevs_operational": 4, 00:10:06.193 "base_bdevs_list": [ 00:10:06.193 { 00:10:06.193 "name": "pt1", 00:10:06.193 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.193 "is_configured": true, 00:10:06.193 "data_offset": 2048, 00:10:06.193 "data_size": 63488 00:10:06.193 }, 00:10:06.193 { 00:10:06.193 "name": "pt2", 00:10:06.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.193 "is_configured": true, 00:10:06.193 "data_offset": 2048, 00:10:06.193 "data_size": 63488 00:10:06.193 }, 00:10:06.193 { 00:10:06.193 "name": "pt3", 00:10:06.193 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.193 "is_configured": true, 00:10:06.193 "data_offset": 2048, 00:10:06.193 
"data_size": 63488 00:10:06.193 }, 00:10:06.193 { 00:10:06.193 "name": "pt4", 00:10:06.193 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:06.193 "is_configured": true, 00:10:06.193 "data_offset": 2048, 00:10:06.193 "data_size": 63488 00:10:06.193 } 00:10:06.193 ] 00:10:06.193 }' 00:10:06.193 16:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.193 16:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.759 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:06.759 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:06.759 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.759 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.759 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.759 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.759 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:06.759 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.759 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.759 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.759 [2024-12-07 16:36:05.415454] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.759 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.759 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.759 "name": "raid_bdev1", 00:10:06.759 "aliases": [ 00:10:06.759 "385e17b9-720e-4635-bfeb-d58d1d661e40" 
00:10:06.759 ], 00:10:06.759 "product_name": "Raid Volume", 00:10:06.759 "block_size": 512, 00:10:06.759 "num_blocks": 253952, 00:10:06.759 "uuid": "385e17b9-720e-4635-bfeb-d58d1d661e40", 00:10:06.759 "assigned_rate_limits": { 00:10:06.759 "rw_ios_per_sec": 0, 00:10:06.759 "rw_mbytes_per_sec": 0, 00:10:06.759 "r_mbytes_per_sec": 0, 00:10:06.759 "w_mbytes_per_sec": 0 00:10:06.759 }, 00:10:06.759 "claimed": false, 00:10:06.759 "zoned": false, 00:10:06.759 "supported_io_types": { 00:10:06.759 "read": true, 00:10:06.759 "write": true, 00:10:06.759 "unmap": true, 00:10:06.759 "flush": true, 00:10:06.759 "reset": true, 00:10:06.759 "nvme_admin": false, 00:10:06.759 "nvme_io": false, 00:10:06.759 "nvme_io_md": false, 00:10:06.759 "write_zeroes": true, 00:10:06.759 "zcopy": false, 00:10:06.759 "get_zone_info": false, 00:10:06.759 "zone_management": false, 00:10:06.759 "zone_append": false, 00:10:06.759 "compare": false, 00:10:06.759 "compare_and_write": false, 00:10:06.759 "abort": false, 00:10:06.759 "seek_hole": false, 00:10:06.759 "seek_data": false, 00:10:06.759 "copy": false, 00:10:06.759 "nvme_iov_md": false 00:10:06.759 }, 00:10:06.759 "memory_domains": [ 00:10:06.759 { 00:10:06.759 "dma_device_id": "system", 00:10:06.759 "dma_device_type": 1 00:10:06.759 }, 00:10:06.759 { 00:10:06.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.759 "dma_device_type": 2 00:10:06.759 }, 00:10:06.759 { 00:10:06.759 "dma_device_id": "system", 00:10:06.759 "dma_device_type": 1 00:10:06.759 }, 00:10:06.759 { 00:10:06.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.759 "dma_device_type": 2 00:10:06.759 }, 00:10:06.759 { 00:10:06.759 "dma_device_id": "system", 00:10:06.759 "dma_device_type": 1 00:10:06.759 }, 00:10:06.759 { 00:10:06.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.759 "dma_device_type": 2 00:10:06.759 }, 00:10:06.759 { 00:10:06.759 "dma_device_id": "system", 00:10:06.759 "dma_device_type": 1 00:10:06.759 }, 00:10:06.759 { 00:10:06.759 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:06.759 "dma_device_type": 2 00:10:06.759 } 00:10:06.759 ], 00:10:06.759 "driver_specific": { 00:10:06.759 "raid": { 00:10:06.759 "uuid": "385e17b9-720e-4635-bfeb-d58d1d661e40", 00:10:06.759 "strip_size_kb": 64, 00:10:06.759 "state": "online", 00:10:06.759 "raid_level": "raid0", 00:10:06.759 "superblock": true, 00:10:06.760 "num_base_bdevs": 4, 00:10:06.760 "num_base_bdevs_discovered": 4, 00:10:06.760 "num_base_bdevs_operational": 4, 00:10:06.760 "base_bdevs_list": [ 00:10:06.760 { 00:10:06.760 "name": "pt1", 00:10:06.760 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.760 "is_configured": true, 00:10:06.760 "data_offset": 2048, 00:10:06.760 "data_size": 63488 00:10:06.760 }, 00:10:06.760 { 00:10:06.760 "name": "pt2", 00:10:06.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.760 "is_configured": true, 00:10:06.760 "data_offset": 2048, 00:10:06.760 "data_size": 63488 00:10:06.760 }, 00:10:06.760 { 00:10:06.760 "name": "pt3", 00:10:06.760 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.760 "is_configured": true, 00:10:06.760 "data_offset": 2048, 00:10:06.760 "data_size": 63488 00:10:06.760 }, 00:10:06.760 { 00:10:06.760 "name": "pt4", 00:10:06.760 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:06.760 "is_configured": true, 00:10:06.760 "data_offset": 2048, 00:10:06.760 "data_size": 63488 00:10:06.760 } 00:10:06.760 ] 00:10:06.760 } 00:10:06.760 } 00:10:06.760 }' 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:06.760 pt2 00:10:06.760 pt3 00:10:06.760 pt4' 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.760 16:36:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.760 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.019 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:07.020 [2024-12-07 16:36:05.710812] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=385e17b9-720e-4635-bfeb-d58d1d661e40 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 385e17b9-720e-4635-bfeb-d58d1d661e40 ']' 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.020 [2024-12-07 16:36:05.758416] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.020 [2024-12-07 16:36:05.758452] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.020 [2024-12-07 16:36:05.758532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.020 [2024-12-07 16:36:05.758609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.020 [2024-12-07 16:36:05.758619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:07.020 16:36:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.280 [2024-12-07 16:36:05.926289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:07.280 [2024-12-07 16:36:05.928699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:07.280 [2024-12-07 16:36:05.928757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:07.280 [2024-12-07 16:36:05.928787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:07.280 [2024-12-07 16:36:05.928844] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:07.280 [2024-12-07 16:36:05.928914] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:07.280 [2024-12-07 16:36:05.928937] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:07.280 [2024-12-07 16:36:05.928954] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:07.280 [2024-12-07 16:36:05.928970] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.280 [2024-12-07 16:36:05.928980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:10:07.280 request: 00:10:07.280 { 00:10:07.280 "name": "raid_bdev1", 00:10:07.280 "raid_level": "raid0", 00:10:07.280 "base_bdevs": [ 00:10:07.280 "malloc1", 00:10:07.280 "malloc2", 00:10:07.280 "malloc3", 00:10:07.280 "malloc4" 00:10:07.280 ], 00:10:07.280 "strip_size_kb": 64, 00:10:07.280 "superblock": false, 00:10:07.280 "method": "bdev_raid_create", 00:10:07.280 "req_id": 1 00:10:07.280 } 00:10:07.280 Got JSON-RPC error response 00:10:07.280 response: 00:10:07.280 { 00:10:07.280 "code": -17, 00:10:07.280 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:07.280 } 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.280 [2024-12-07 16:36:05.978104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:07.280 [2024-12-07 16:36:05.978244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.280 [2024-12-07 16:36:05.978289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:07.280 [2024-12-07 16:36:05.978321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.280 [2024-12-07 16:36:05.981005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.280 [2024-12-07 16:36:05.981081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:07.280 [2024-12-07 16:36:05.981219] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:07.280 [2024-12-07 16:36:05.981305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:07.280 pt1 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.280 16:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.280 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.280 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.280 "name": "raid_bdev1", 00:10:07.280 "uuid": "385e17b9-720e-4635-bfeb-d58d1d661e40", 00:10:07.280 "strip_size_kb": 64, 00:10:07.280 "state": "configuring", 00:10:07.280 "raid_level": "raid0", 00:10:07.280 "superblock": true, 00:10:07.280 "num_base_bdevs": 4, 00:10:07.280 "num_base_bdevs_discovered": 1, 00:10:07.280 "num_base_bdevs_operational": 4, 00:10:07.280 "base_bdevs_list": [ 00:10:07.280 { 00:10:07.280 "name": "pt1", 00:10:07.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:07.280 "is_configured": true, 00:10:07.280 "data_offset": 2048, 00:10:07.280 "data_size": 63488 00:10:07.280 }, 00:10:07.280 { 00:10:07.280 "name": null, 00:10:07.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.280 "is_configured": false, 00:10:07.280 "data_offset": 2048, 00:10:07.280 "data_size": 63488 00:10:07.280 }, 00:10:07.280 { 00:10:07.280 "name": null, 00:10:07.280 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:07.280 "is_configured": false, 00:10:07.280 "data_offset": 2048, 00:10:07.280 "data_size": 63488 00:10:07.280 }, 00:10:07.280 { 00:10:07.280 "name": null, 00:10:07.280 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:07.280 "is_configured": false, 00:10:07.280 "data_offset": 2048, 00:10:07.280 "data_size": 63488 00:10:07.280 } 00:10:07.280 ] 00:10:07.280 }' 00:10:07.280 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.280 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.540 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:07.540 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:07.540 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.540 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.799 [2024-12-07 16:36:06.441326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:07.799 [2024-12-07 16:36:06.441492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.800 [2024-12-07 16:36:06.441538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:07.800 [2024-12-07 16:36:06.441566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.800 [2024-12-07 16:36:06.442086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.800 [2024-12-07 16:36:06.442107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:07.800 [2024-12-07 16:36:06.442207] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:07.800 [2024-12-07 16:36:06.442236] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:07.800 pt2 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.800 [2024-12-07 16:36:06.453268] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.800 16:36:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.800 "name": "raid_bdev1", 00:10:07.800 "uuid": "385e17b9-720e-4635-bfeb-d58d1d661e40", 00:10:07.800 "strip_size_kb": 64, 00:10:07.800 "state": "configuring", 00:10:07.800 "raid_level": "raid0", 00:10:07.800 "superblock": true, 00:10:07.800 "num_base_bdevs": 4, 00:10:07.800 "num_base_bdevs_discovered": 1, 00:10:07.800 "num_base_bdevs_operational": 4, 00:10:07.800 "base_bdevs_list": [ 00:10:07.800 { 00:10:07.800 "name": "pt1", 00:10:07.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:07.800 "is_configured": true, 00:10:07.800 "data_offset": 2048, 00:10:07.800 "data_size": 63488 00:10:07.800 }, 00:10:07.800 { 00:10:07.800 "name": null, 00:10:07.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.800 "is_configured": false, 00:10:07.800 "data_offset": 0, 00:10:07.800 "data_size": 63488 00:10:07.800 }, 00:10:07.800 { 00:10:07.800 "name": null, 00:10:07.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:07.800 "is_configured": false, 00:10:07.800 "data_offset": 2048, 00:10:07.800 "data_size": 63488 00:10:07.800 }, 00:10:07.800 { 00:10:07.800 "name": null, 00:10:07.800 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:07.800 "is_configured": false, 00:10:07.800 "data_offset": 2048, 00:10:07.800 "data_size": 63488 00:10:07.800 } 00:10:07.800 ] 00:10:07.800 }' 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.800 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.061 [2024-12-07 16:36:06.900538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:08.061 [2024-12-07 16:36:06.900685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.061 [2024-12-07 16:36:06.900725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:08.061 [2024-12-07 16:36:06.900756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.061 [2024-12-07 16:36:06.901278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.061 [2024-12-07 16:36:06.901355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:08.061 [2024-12-07 16:36:06.901489] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:08.061 [2024-12-07 16:36:06.901549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:08.061 pt2 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.061 [2024-12-07 16:36:06.912451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:08.061 [2024-12-07 16:36:06.912562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.061 [2024-12-07 16:36:06.912602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:08.061 [2024-12-07 16:36:06.912632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.061 [2024-12-07 16:36:06.913120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.061 [2024-12-07 16:36:06.913180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:08.061 [2024-12-07 16:36:06.913300] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:08.061 [2024-12-07 16:36:06.913364] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:08.061 pt3 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.061 [2024-12-07 16:36:06.924439] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:10:08.061 [2024-12-07 16:36:06.924538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.061 [2024-12-07 16:36:06.924576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:08.061 [2024-12-07 16:36:06.924605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.061 [2024-12-07 16:36:06.925076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.061 [2024-12-07 16:36:06.925133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:08.061 [2024-12-07 16:36:06.925242] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:08.061 [2024-12-07 16:36:06.925300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:08.061 [2024-12-07 16:36:06.925485] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:08.061 [2024-12-07 16:36:06.925529] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:08.061 [2024-12-07 16:36:06.925832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:08.061 [2024-12-07 16:36:06.926006] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:08.061 [2024-12-07 16:36:06.926044] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:08.061 [2024-12-07 16:36:06.926201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.061 pt4 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:08.061 
16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.061 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.321 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.321 "name": "raid_bdev1", 00:10:08.321 "uuid": "385e17b9-720e-4635-bfeb-d58d1d661e40", 00:10:08.321 "strip_size_kb": 64, 00:10:08.321 "state": "online", 00:10:08.321 "raid_level": "raid0", 00:10:08.321 "superblock": true, 00:10:08.321 
"num_base_bdevs": 4, 00:10:08.321 "num_base_bdevs_discovered": 4, 00:10:08.321 "num_base_bdevs_operational": 4, 00:10:08.321 "base_bdevs_list": [ 00:10:08.321 { 00:10:08.321 "name": "pt1", 00:10:08.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.321 "is_configured": true, 00:10:08.321 "data_offset": 2048, 00:10:08.321 "data_size": 63488 00:10:08.321 }, 00:10:08.321 { 00:10:08.321 "name": "pt2", 00:10:08.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.321 "is_configured": true, 00:10:08.321 "data_offset": 2048, 00:10:08.321 "data_size": 63488 00:10:08.321 }, 00:10:08.321 { 00:10:08.321 "name": "pt3", 00:10:08.321 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:08.321 "is_configured": true, 00:10:08.321 "data_offset": 2048, 00:10:08.321 "data_size": 63488 00:10:08.321 }, 00:10:08.321 { 00:10:08.321 "name": "pt4", 00:10:08.321 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:08.321 "is_configured": true, 00:10:08.321 "data_offset": 2048, 00:10:08.321 "data_size": 63488 00:10:08.321 } 00:10:08.321 ] 00:10:08.321 }' 00:10:08.321 16:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.321 16:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.581 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:08.581 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:08.581 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.581 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.581 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.581 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.581 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
jq '.[]' 00:10:08.581 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:08.581 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.581 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.581 [2024-12-07 16:36:07.403997] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.581 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.581 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.581 "name": "raid_bdev1", 00:10:08.581 "aliases": [ 00:10:08.581 "385e17b9-720e-4635-bfeb-d58d1d661e40" 00:10:08.581 ], 00:10:08.581 "product_name": "Raid Volume", 00:10:08.581 "block_size": 512, 00:10:08.581 "num_blocks": 253952, 00:10:08.581 "uuid": "385e17b9-720e-4635-bfeb-d58d1d661e40", 00:10:08.581 "assigned_rate_limits": { 00:10:08.581 "rw_ios_per_sec": 0, 00:10:08.581 "rw_mbytes_per_sec": 0, 00:10:08.581 "r_mbytes_per_sec": 0, 00:10:08.581 "w_mbytes_per_sec": 0 00:10:08.581 }, 00:10:08.581 "claimed": false, 00:10:08.581 "zoned": false, 00:10:08.581 "supported_io_types": { 00:10:08.581 "read": true, 00:10:08.581 "write": true, 00:10:08.581 "unmap": true, 00:10:08.581 "flush": true, 00:10:08.581 "reset": true, 00:10:08.581 "nvme_admin": false, 00:10:08.581 "nvme_io": false, 00:10:08.581 "nvme_io_md": false, 00:10:08.581 "write_zeroes": true, 00:10:08.581 "zcopy": false, 00:10:08.581 "get_zone_info": false, 00:10:08.581 "zone_management": false, 00:10:08.581 "zone_append": false, 00:10:08.581 "compare": false, 00:10:08.581 "compare_and_write": false, 00:10:08.581 "abort": false, 00:10:08.581 "seek_hole": false, 00:10:08.581 "seek_data": false, 00:10:08.581 "copy": false, 00:10:08.581 "nvme_iov_md": false 00:10:08.581 }, 00:10:08.581 "memory_domains": [ 00:10:08.581 { 00:10:08.581 "dma_device_id": "system", 
00:10:08.581 "dma_device_type": 1 00:10:08.581 }, 00:10:08.581 { 00:10:08.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.581 "dma_device_type": 2 00:10:08.581 }, 00:10:08.581 { 00:10:08.581 "dma_device_id": "system", 00:10:08.581 "dma_device_type": 1 00:10:08.581 }, 00:10:08.581 { 00:10:08.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.581 "dma_device_type": 2 00:10:08.581 }, 00:10:08.581 { 00:10:08.581 "dma_device_id": "system", 00:10:08.581 "dma_device_type": 1 00:10:08.581 }, 00:10:08.581 { 00:10:08.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.581 "dma_device_type": 2 00:10:08.581 }, 00:10:08.581 { 00:10:08.581 "dma_device_id": "system", 00:10:08.581 "dma_device_type": 1 00:10:08.581 }, 00:10:08.581 { 00:10:08.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.582 "dma_device_type": 2 00:10:08.582 } 00:10:08.582 ], 00:10:08.582 "driver_specific": { 00:10:08.582 "raid": { 00:10:08.582 "uuid": "385e17b9-720e-4635-bfeb-d58d1d661e40", 00:10:08.582 "strip_size_kb": 64, 00:10:08.582 "state": "online", 00:10:08.582 "raid_level": "raid0", 00:10:08.582 "superblock": true, 00:10:08.582 "num_base_bdevs": 4, 00:10:08.582 "num_base_bdevs_discovered": 4, 00:10:08.582 "num_base_bdevs_operational": 4, 00:10:08.582 "base_bdevs_list": [ 00:10:08.582 { 00:10:08.582 "name": "pt1", 00:10:08.582 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.582 "is_configured": true, 00:10:08.582 "data_offset": 2048, 00:10:08.582 "data_size": 63488 00:10:08.582 }, 00:10:08.582 { 00:10:08.582 "name": "pt2", 00:10:08.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.582 "is_configured": true, 00:10:08.582 "data_offset": 2048, 00:10:08.582 "data_size": 63488 00:10:08.582 }, 00:10:08.582 { 00:10:08.582 "name": "pt3", 00:10:08.582 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:08.582 "is_configured": true, 00:10:08.582 "data_offset": 2048, 00:10:08.582 "data_size": 63488 00:10:08.582 }, 00:10:08.582 { 00:10:08.582 "name": "pt4", 00:10:08.582 
"uuid": "00000000-0000-0000-0000-000000000004", 00:10:08.582 "is_configured": true, 00:10:08.582 "data_offset": 2048, 00:10:08.582 "data_size": 63488 00:10:08.582 } 00:10:08.582 ] 00:10:08.582 } 00:10:08.582 } 00:10:08.582 }' 00:10:08.582 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:08.842 pt2 00:10:08.842 pt3 00:10:08.842 pt4' 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:08.842 [2024-12-07 16:36:07.711403] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.842 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.102 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 385e17b9-720e-4635-bfeb-d58d1d661e40 '!=' 385e17b9-720e-4635-bfeb-d58d1d661e40 ']' 00:10:09.102 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:09.102 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:09.102 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:09.102 16:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81955 00:10:09.102 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81955 ']' 00:10:09.102 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81955 00:10:09.102 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:09.102 16:36:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:09.102 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81955 00:10:09.102 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:09.102 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:09.102 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81955' 00:10:09.102 killing process with pid 81955 00:10:09.102 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81955 00:10:09.102 [2024-12-07 16:36:07.792428] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:09.102 [2024-12-07 16:36:07.792607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.102 16:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81955 00:10:09.102 [2024-12-07 16:36:07.792722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.102 [2024-12-07 16:36:07.792770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:09.102 [2024-12-07 16:36:07.876603] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.362 16:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:09.362 00:10:09.362 real 0m4.444s 00:10:09.362 user 0m6.728s 00:10:09.362 sys 0m1.060s 00:10:09.362 16:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.362 16:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.362 ************************************ 00:10:09.362 END TEST raid_superblock_test 00:10:09.362 ************************************ 00:10:09.622 
16:36:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:09.622 16:36:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:09.622 16:36:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.622 16:36:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.622 ************************************ 00:10:09.622 START TEST raid_read_error_test 00:10:09.622 ************************************ 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bolLT9ociS 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82213 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82213 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 
-t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 82213 ']' 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.622 16:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.622 [2024-12-07 16:36:08.436852] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:09.623 [2024-12-07 16:36:08.437438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82213 ] 00:10:09.886 [2024-12-07 16:36:08.581867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.886 [2024-12-07 16:36:08.650281] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.886 [2024-12-07 16:36:08.726099] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.886 [2024-12-07 16:36:08.726234] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.457 BaseBdev1_malloc 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.457 true 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.457 [2024-12-07 16:36:09.316258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:10.457 [2024-12-07 16:36:09.316367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.457 [2024-12-07 16:36:09.316406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:10.457 [2024-12-07 16:36:09.316435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.457 [2024-12-07 16:36:09.318814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.457 [2024-12-07 16:36:09.318883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:10.457 BaseBdev1 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.457 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.718 BaseBdev2_malloc 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.718 true 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.718 [2024-12-07 16:36:09.373833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:10.718 [2024-12-07 16:36:09.373889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.718 [2024-12-07 16:36:09.373908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:10.718 [2024-12-07 16:36:09.373917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.718 [2024-12-07 16:36:09.376326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.718 [2024-12-07 16:36:09.376433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:10.718 BaseBdev2 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.718 BaseBdev3_malloc 00:10:10.718 16:36:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.718 true 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.718 [2024-12-07 16:36:09.420530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:10.718 [2024-12-07 16:36:09.420581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.718 [2024-12-07 16:36:09.420602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:10.718 [2024-12-07 16:36:09.420611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.718 [2024-12-07 16:36:09.423094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.718 [2024-12-07 16:36:09.423175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:10.718 BaseBdev3 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.718 BaseBdev4_malloc 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.718 true 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.718 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.718 [2024-12-07 16:36:09.467657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:10.719 [2024-12-07 16:36:09.467712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.719 [2024-12-07 16:36:09.467738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:10.719 [2024-12-07 16:36:09.467747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.719 [2024-12-07 16:36:09.470181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.719 [2024-12-07 16:36:09.470254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:10.719 BaseBdev4 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.719 [2024-12-07 16:36:09.479699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.719 [2024-12-07 16:36:09.481880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.719 [2024-12-07 16:36:09.481976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.719 [2024-12-07 16:36:09.482033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:10.719 [2024-12-07 16:36:09.482246] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:10.719 [2024-12-07 16:36:09.482259] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:10.719 [2024-12-07 16:36:09.482590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:10.719 [2024-12-07 16:36:09.482770] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:10.719 [2024-12-07 16:36:09.482790] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:10.719 [2024-12-07 16:36:09.482957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:10.719 16:36:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.719 "name": "raid_bdev1", 00:10:10.719 "uuid": "e6e3dcb8-e66a-4932-bae6-986a9d492a6f", 00:10:10.719 "strip_size_kb": 64, 00:10:10.719 "state": "online", 00:10:10.719 "raid_level": "raid0", 00:10:10.719 "superblock": true, 00:10:10.719 "num_base_bdevs": 4, 00:10:10.719 "num_base_bdevs_discovered": 4, 00:10:10.719 "num_base_bdevs_operational": 4, 00:10:10.719 "base_bdevs_list": [ 00:10:10.719 
{ 00:10:10.719 "name": "BaseBdev1", 00:10:10.719 "uuid": "e79f2d17-9efe-5560-a33a-00a197e909bd", 00:10:10.719 "is_configured": true, 00:10:10.719 "data_offset": 2048, 00:10:10.719 "data_size": 63488 00:10:10.719 }, 00:10:10.719 { 00:10:10.719 "name": "BaseBdev2", 00:10:10.719 "uuid": "5070cc02-41ae-5763-b214-d673b5a28b96", 00:10:10.719 "is_configured": true, 00:10:10.719 "data_offset": 2048, 00:10:10.719 "data_size": 63488 00:10:10.719 }, 00:10:10.719 { 00:10:10.719 "name": "BaseBdev3", 00:10:10.719 "uuid": "fa47934b-e9e7-58bc-beee-f85a7a06cac1", 00:10:10.719 "is_configured": true, 00:10:10.719 "data_offset": 2048, 00:10:10.719 "data_size": 63488 00:10:10.719 }, 00:10:10.719 { 00:10:10.719 "name": "BaseBdev4", 00:10:10.719 "uuid": "6e7599fc-94bf-5c06-9837-f4229d5ffe6c", 00:10:10.719 "is_configured": true, 00:10:10.719 "data_offset": 2048, 00:10:10.719 "data_size": 63488 00:10:10.719 } 00:10:10.719 ] 00:10:10.719 }' 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.719 16:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.979 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:10.979 16:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:11.239 [2024-12-07 16:36:09.955432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.180 16:36:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.180 16:36:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.180 "name": "raid_bdev1", 00:10:12.180 "uuid": "e6e3dcb8-e66a-4932-bae6-986a9d492a6f", 00:10:12.180 "strip_size_kb": 64, 00:10:12.180 "state": "online", 00:10:12.180 "raid_level": "raid0", 00:10:12.180 "superblock": true, 00:10:12.180 "num_base_bdevs": 4, 00:10:12.180 "num_base_bdevs_discovered": 4, 00:10:12.180 "num_base_bdevs_operational": 4, 00:10:12.180 "base_bdevs_list": [ 00:10:12.180 { 00:10:12.180 "name": "BaseBdev1", 00:10:12.180 "uuid": "e79f2d17-9efe-5560-a33a-00a197e909bd", 00:10:12.180 "is_configured": true, 00:10:12.180 "data_offset": 2048, 00:10:12.180 "data_size": 63488 00:10:12.180 }, 00:10:12.180 { 00:10:12.180 "name": "BaseBdev2", 00:10:12.180 "uuid": "5070cc02-41ae-5763-b214-d673b5a28b96", 00:10:12.180 "is_configured": true, 00:10:12.180 "data_offset": 2048, 00:10:12.180 "data_size": 63488 00:10:12.180 }, 00:10:12.180 { 00:10:12.180 "name": "BaseBdev3", 00:10:12.180 "uuid": "fa47934b-e9e7-58bc-beee-f85a7a06cac1", 00:10:12.180 "is_configured": true, 00:10:12.180 "data_offset": 2048, 00:10:12.180 "data_size": 63488 00:10:12.180 }, 00:10:12.180 { 00:10:12.180 "name": "BaseBdev4", 00:10:12.180 "uuid": "6e7599fc-94bf-5c06-9837-f4229d5ffe6c", 00:10:12.180 "is_configured": true, 00:10:12.180 "data_offset": 2048, 00:10:12.180 "data_size": 63488 00:10:12.180 } 00:10:12.180 ] 00:10:12.180 }' 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.180 16:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.440 16:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:12.441 16:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.441 16:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.441 [2024-12-07 16:36:11.296328] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:12.441 [2024-12-07 16:36:11.296375] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.441 [2024-12-07 16:36:11.298891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.441 [2024-12-07 16:36:11.298980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.441 [2024-12-07 16:36:11.299036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.441 [2024-12-07 16:36:11.299047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:12.441 { 00:10:12.441 "results": [ 00:10:12.441 { 00:10:12.441 "job": "raid_bdev1", 00:10:12.441 "core_mask": "0x1", 00:10:12.441 "workload": "randrw", 00:10:12.441 "percentage": 50, 00:10:12.441 "status": "finished", 00:10:12.441 "queue_depth": 1, 00:10:12.441 "io_size": 131072, 00:10:12.441 "runtime": 1.34123, 00:10:12.441 "iops": 14152.68074826838, 00:10:12.441 "mibps": 1769.0850935335475, 00:10:12.441 "io_failed": 1, 00:10:12.441 "io_timeout": 0, 00:10:12.441 "avg_latency_us": 99.65666858441718, 00:10:12.441 "min_latency_us": 25.6, 00:10:12.441 "max_latency_us": 1352.216593886463 00:10:12.441 } 00:10:12.441 ], 00:10:12.441 "core_count": 1 00:10:12.441 } 00:10:12.441 16:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.441 16:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82213 00:10:12.441 16:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 82213 ']' 00:10:12.441 16:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 82213 00:10:12.441 16:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:12.441 16:36:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:12.441 16:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82213 00:10:12.742 16:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:12.742 16:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:12.742 16:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82213' 00:10:12.742 killing process with pid 82213 00:10:12.742 16:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 82213 00:10:12.742 [2024-12-07 16:36:11.350755] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:12.742 16:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 82213 00:10:12.742 [2024-12-07 16:36:11.420925] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:13.009 16:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:13.009 16:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bolLT9ociS 00:10:13.009 16:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:13.009 ************************************ 00:10:13.009 END TEST raid_read_error_test 00:10:13.009 ************************************ 00:10:13.009 16:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:13.009 16:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:13.009 16:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:13.009 16:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:13.009 16:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:13.009 00:10:13.009 real 0m3.472s 
00:10:13.009 user 0m4.158s 00:10:13.009 sys 0m0.657s 00:10:13.009 16:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.009 16:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.009 16:36:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:13.009 16:36:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:13.009 16:36:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.009 16:36:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.009 ************************************ 00:10:13.009 START TEST raid_write_error_test 00:10:13.009 ************************************ 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.70k1ZTrv0c 00:10:13.009 16:36:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82343 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82343 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82343 ']' 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.009 16:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.010 16:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.010 16:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.269 [2024-12-07 16:36:11.985010] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:13.269 [2024-12-07 16:36:11.985735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82343 ] 00:10:13.269 [2024-12-07 16:36:12.147192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.529 [2024-12-07 16:36:12.226337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.529 [2024-12-07 16:36:12.304491] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.529 [2024-12-07 16:36:12.304538] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 BaseBdev1_malloc 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 true 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 [2024-12-07 16:36:12.860078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:14.100 [2024-12-07 16:36:12.860183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.100 [2024-12-07 16:36:12.860220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:14.100 [2024-12-07 16:36:12.860236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.100 [2024-12-07 16:36:12.862656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.100 [2024-12-07 16:36:12.862691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:14.100 BaseBdev1 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 BaseBdev2_malloc 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:14.100 16:36:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 true 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 [2024-12-07 16:36:12.917665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:14.100 [2024-12-07 16:36:12.917714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.100 [2024-12-07 16:36:12.917736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:14.100 [2024-12-07 16:36:12.917746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.100 [2024-12-07 16:36:12.920114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.100 [2024-12-07 16:36:12.920149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:14.100 BaseBdev2 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:14.100 16:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:14.101 BaseBdev3_malloc 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.101 true 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.101 [2024-12-07 16:36:12.964268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:14.101 [2024-12-07 16:36:12.964322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.101 [2024-12-07 16:36:12.964345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:14.101 [2024-12-07 16:36:12.964367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.101 [2024-12-07 16:36:12.966690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.101 [2024-12-07 16:36:12.966765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:14.101 BaseBdev3 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.101 BaseBdev4_malloc 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.101 16:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.361 true 00:10:14.361 16:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.361 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:14.361 16:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.361 16:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.361 [2024-12-07 16:36:13.010763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:14.361 [2024-12-07 16:36:13.010807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.361 [2024-12-07 16:36:13.010833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:14.361 [2024-12-07 16:36:13.010842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.361 [2024-12-07 16:36:13.013194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.361 [2024-12-07 16:36:13.013228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:14.361 BaseBdev4 
00:10:14.361 16:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.361 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:14.361 16:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.362 [2024-12-07 16:36:13.022801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.362 [2024-12-07 16:36:13.025069] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.362 [2024-12-07 16:36:13.025203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.362 [2024-12-07 16:36:13.025261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:14.362 [2024-12-07 16:36:13.025469] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:14.362 [2024-12-07 16:36:13.025482] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:14.362 [2024-12-07 16:36:13.025739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:14.362 [2024-12-07 16:36:13.025886] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:14.362 [2024-12-07 16:36:13.025899] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:14.362 [2024-12-07 16:36:13.026019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.362 "name": "raid_bdev1", 00:10:14.362 "uuid": "97b5027d-f86a-4f1f-99ce-491f6fc804af", 00:10:14.362 "strip_size_kb": 64, 00:10:14.362 "state": "online", 00:10:14.362 "raid_level": "raid0", 00:10:14.362 "superblock": true, 00:10:14.362 "num_base_bdevs": 4, 00:10:14.362 "num_base_bdevs_discovered": 4, 00:10:14.362 
"num_base_bdevs_operational": 4, 00:10:14.362 "base_bdevs_list": [ 00:10:14.362 { 00:10:14.362 "name": "BaseBdev1", 00:10:14.362 "uuid": "4b4ce3a0-d5d1-5f11-bcab-7f4220547d91", 00:10:14.362 "is_configured": true, 00:10:14.362 "data_offset": 2048, 00:10:14.362 "data_size": 63488 00:10:14.362 }, 00:10:14.362 { 00:10:14.362 "name": "BaseBdev2", 00:10:14.362 "uuid": "b48bb7c6-2d30-5c1e-8bc2-a1ac53e74fc0", 00:10:14.362 "is_configured": true, 00:10:14.362 "data_offset": 2048, 00:10:14.362 "data_size": 63488 00:10:14.362 }, 00:10:14.362 { 00:10:14.362 "name": "BaseBdev3", 00:10:14.362 "uuid": "0a186867-bebe-5379-94f9-6901f745dd16", 00:10:14.362 "is_configured": true, 00:10:14.362 "data_offset": 2048, 00:10:14.362 "data_size": 63488 00:10:14.362 }, 00:10:14.362 { 00:10:14.362 "name": "BaseBdev4", 00:10:14.362 "uuid": "66756117-6c41-565e-abfa-b92bff247356", 00:10:14.362 "is_configured": true, 00:10:14.362 "data_offset": 2048, 00:10:14.362 "data_size": 63488 00:10:14.362 } 00:10:14.362 ] 00:10:14.362 }' 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.362 16:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.622 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:14.622 16:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:14.622 [2024-12-07 16:36:13.494511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:15.560 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:15.560 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.560 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.560 16:36:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.561 16:36:14 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.819 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.819 "name": "raid_bdev1", 00:10:15.819 "uuid": "97b5027d-f86a-4f1f-99ce-491f6fc804af", 00:10:15.819 "strip_size_kb": 64, 00:10:15.819 "state": "online", 00:10:15.819 "raid_level": "raid0", 00:10:15.819 "superblock": true, 00:10:15.819 "num_base_bdevs": 4, 00:10:15.819 "num_base_bdevs_discovered": 4, 00:10:15.819 "num_base_bdevs_operational": 4, 00:10:15.819 "base_bdevs_list": [ 00:10:15.819 { 00:10:15.819 "name": "BaseBdev1", 00:10:15.819 "uuid": "4b4ce3a0-d5d1-5f11-bcab-7f4220547d91", 00:10:15.819 "is_configured": true, 00:10:15.819 "data_offset": 2048, 00:10:15.819 "data_size": 63488 00:10:15.819 }, 00:10:15.819 { 00:10:15.819 "name": "BaseBdev2", 00:10:15.819 "uuid": "b48bb7c6-2d30-5c1e-8bc2-a1ac53e74fc0", 00:10:15.819 "is_configured": true, 00:10:15.819 "data_offset": 2048, 00:10:15.819 "data_size": 63488 00:10:15.819 }, 00:10:15.819 { 00:10:15.819 "name": "BaseBdev3", 00:10:15.819 "uuid": "0a186867-bebe-5379-94f9-6901f745dd16", 00:10:15.819 "is_configured": true, 00:10:15.819 "data_offset": 2048, 00:10:15.819 "data_size": 63488 00:10:15.819 }, 00:10:15.819 { 00:10:15.819 "name": "BaseBdev4", 00:10:15.819 "uuid": "66756117-6c41-565e-abfa-b92bff247356", 00:10:15.819 "is_configured": true, 00:10:15.819 "data_offset": 2048, 00:10:15.819 "data_size": 63488 00:10:15.819 } 00:10:15.819 ] 00:10:15.819 }' 00:10:15.819 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.819 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:16.078 [2024-12-07 16:36:14.871752] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:16.078 [2024-12-07 16:36:14.871803] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.078 [2024-12-07 16:36:14.874456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.078 [2024-12-07 16:36:14.874552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.078 [2024-12-07 16:36:14.874640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.078 [2024-12-07 16:36:14.874684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:16.078 { 00:10:16.078 "results": [ 00:10:16.078 { 00:10:16.078 "job": "raid_bdev1", 00:10:16.078 "core_mask": "0x1", 00:10:16.078 "workload": "randrw", 00:10:16.078 "percentage": 50, 00:10:16.078 "status": "finished", 00:10:16.078 "queue_depth": 1, 00:10:16.078 "io_size": 131072, 00:10:16.078 "runtime": 1.377644, 00:10:16.078 "iops": 14164.76244951526, 00:10:16.078 "mibps": 1770.5953061894074, 00:10:16.078 "io_failed": 1, 00:10:16.078 "io_timeout": 0, 00:10:16.078 "avg_latency_us": 99.57330988255592, 00:10:16.078 "min_latency_us": 25.041048034934498, 00:10:16.078 "max_latency_us": 1502.46288209607 00:10:16.078 } 00:10:16.078 ], 00:10:16.078 "core_count": 1 00:10:16.078 } 00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82343 00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82343 ']' 00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 82343 00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82343 00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82343' 00:10:16.078 killing process with pid 82343 00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82343 00:10:16.078 [2024-12-07 16:36:14.912271] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:16.078 16:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82343 00:10:16.340 [2024-12-07 16:36:14.981251] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.600 16:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:16.600 16:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.70k1ZTrv0c 00:10:16.600 16:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:16.600 16:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:16.600 ************************************ 00:10:16.600 END TEST raid_write_error_test 00:10:16.600 ************************************ 00:10:16.600 16:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:16.600 16:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:16.600 16:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:16.600 16:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.73 != \0\.\0\0 ]] 00:10:16.600 00:10:16.600 real 0m3.491s 00:10:16.600 user 0m4.177s 00:10:16.600 sys 0m0.672s 00:10:16.600 16:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.600 16:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.600 16:36:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:16.600 16:36:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:16.600 16:36:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:16.600 16:36:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.600 16:36:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.600 ************************************ 00:10:16.600 START TEST raid_state_function_test 00:10:16.600 ************************************ 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:16.600 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82476 00:10:16.601 Process raid pid: 82476 00:10:16.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82476' 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82476 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82476 ']' 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.601 16:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.860 [2024-12-07 16:36:15.531560] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:16.860 [2024-12-07 16:36:15.532145] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.860 [2024-12-07 16:36:15.691505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.120 [2024-12-07 16:36:15.768729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.120 [2024-12-07 16:36:15.845122] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.120 [2024-12-07 16:36:15.845258] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.689 [2024-12-07 16:36:16.384645] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.689 [2024-12-07 16:36:16.384764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.689 [2024-12-07 16:36:16.384810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.689 [2024-12-07 16:36:16.384834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.689 [2024-12-07 16:36:16.384852] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:17.689 [2024-12-07 16:36:16.384876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.689 [2024-12-07 16:36:16.384893] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:17.689 [2024-12-07 16:36:16.384927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.689 "name": "Existed_Raid", 00:10:17.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.689 "strip_size_kb": 64, 00:10:17.689 "state": "configuring", 00:10:17.689 "raid_level": "concat", 00:10:17.689 "superblock": false, 00:10:17.689 "num_base_bdevs": 4, 00:10:17.689 "num_base_bdevs_discovered": 0, 00:10:17.689 "num_base_bdevs_operational": 4, 00:10:17.689 "base_bdevs_list": [ 00:10:17.689 { 00:10:17.689 "name": "BaseBdev1", 00:10:17.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.689 "is_configured": false, 00:10:17.689 "data_offset": 0, 00:10:17.689 "data_size": 0 00:10:17.689 }, 00:10:17.689 { 00:10:17.689 "name": "BaseBdev2", 00:10:17.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.689 "is_configured": false, 00:10:17.689 "data_offset": 0, 00:10:17.689 "data_size": 0 00:10:17.689 }, 00:10:17.689 { 00:10:17.689 "name": "BaseBdev3", 00:10:17.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.689 "is_configured": false, 00:10:17.689 "data_offset": 0, 00:10:17.689 "data_size": 0 00:10:17.689 }, 00:10:17.689 { 00:10:17.689 "name": "BaseBdev4", 00:10:17.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.689 "is_configured": false, 00:10:17.689 "data_offset": 0, 00:10:17.689 "data_size": 0 00:10:17.689 } 00:10:17.689 ] 00:10:17.689 }' 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.689 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.949 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:17.949 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.949 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.949 [2024-12-07 16:36:16.831769] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.949 [2024-12-07 16:36:16.831863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:17.949 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.949 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.949 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.949 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.949 [2024-12-07 16:36:16.843783] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.949 [2024-12-07 16:36:16.843862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.949 [2024-12-07 16:36:16.843889] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.949 [2024-12-07 16:36:16.843913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.949 [2024-12-07 16:36:16.843932] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:17.949 [2024-12-07 16:36:16.843954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.949 [2024-12-07 16:36:16.843972] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:17.949 [2024-12-07 16:36:16.843994] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.209 [2024-12-07 16:36:16.870817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.209 BaseBdev1 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.209 [ 00:10:18.209 { 00:10:18.209 "name": "BaseBdev1", 00:10:18.209 "aliases": [ 00:10:18.209 "e0af7e1d-1256-468d-afe9-b48ad620f3b5" 00:10:18.209 ], 00:10:18.209 "product_name": "Malloc disk", 00:10:18.209 "block_size": 512, 00:10:18.209 "num_blocks": 65536, 00:10:18.209 "uuid": "e0af7e1d-1256-468d-afe9-b48ad620f3b5", 00:10:18.209 "assigned_rate_limits": { 00:10:18.209 "rw_ios_per_sec": 0, 00:10:18.209 "rw_mbytes_per_sec": 0, 00:10:18.209 "r_mbytes_per_sec": 0, 00:10:18.209 "w_mbytes_per_sec": 0 00:10:18.209 }, 00:10:18.209 "claimed": true, 00:10:18.209 "claim_type": "exclusive_write", 00:10:18.209 "zoned": false, 00:10:18.209 "supported_io_types": { 00:10:18.209 "read": true, 00:10:18.209 "write": true, 00:10:18.209 "unmap": true, 00:10:18.209 "flush": true, 00:10:18.209 "reset": true, 00:10:18.209 "nvme_admin": false, 00:10:18.209 "nvme_io": false, 00:10:18.209 "nvme_io_md": false, 00:10:18.209 "write_zeroes": true, 00:10:18.209 "zcopy": true, 00:10:18.209 "get_zone_info": false, 00:10:18.209 "zone_management": false, 00:10:18.209 "zone_append": false, 00:10:18.209 "compare": false, 00:10:18.209 "compare_and_write": false, 00:10:18.209 "abort": true, 00:10:18.209 "seek_hole": false, 00:10:18.209 "seek_data": false, 00:10:18.209 "copy": true, 00:10:18.209 "nvme_iov_md": false 00:10:18.209 }, 00:10:18.209 "memory_domains": [ 00:10:18.209 { 00:10:18.209 "dma_device_id": "system", 00:10:18.209 "dma_device_type": 1 00:10:18.209 }, 00:10:18.209 { 00:10:18.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.209 "dma_device_type": 2 00:10:18.209 } 00:10:18.209 ], 00:10:18.209 "driver_specific": {} 00:10:18.209 } 00:10:18.209 ] 00:10:18.209 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.210 "name": "Existed_Raid", 
00:10:18.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.210 "strip_size_kb": 64, 00:10:18.210 "state": "configuring", 00:10:18.210 "raid_level": "concat", 00:10:18.210 "superblock": false, 00:10:18.210 "num_base_bdevs": 4, 00:10:18.210 "num_base_bdevs_discovered": 1, 00:10:18.210 "num_base_bdevs_operational": 4, 00:10:18.210 "base_bdevs_list": [ 00:10:18.210 { 00:10:18.210 "name": "BaseBdev1", 00:10:18.210 "uuid": "e0af7e1d-1256-468d-afe9-b48ad620f3b5", 00:10:18.210 "is_configured": true, 00:10:18.210 "data_offset": 0, 00:10:18.210 "data_size": 65536 00:10:18.210 }, 00:10:18.210 { 00:10:18.210 "name": "BaseBdev2", 00:10:18.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.210 "is_configured": false, 00:10:18.210 "data_offset": 0, 00:10:18.210 "data_size": 0 00:10:18.210 }, 00:10:18.210 { 00:10:18.210 "name": "BaseBdev3", 00:10:18.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.210 "is_configured": false, 00:10:18.210 "data_offset": 0, 00:10:18.210 "data_size": 0 00:10:18.210 }, 00:10:18.210 { 00:10:18.210 "name": "BaseBdev4", 00:10:18.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.210 "is_configured": false, 00:10:18.210 "data_offset": 0, 00:10:18.210 "data_size": 0 00:10:18.210 } 00:10:18.210 ] 00:10:18.210 }' 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.210 16:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.469 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:18.469 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.469 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.469 [2024-12-07 16:36:17.334138] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:18.469 [2024-12-07 16:36:17.334287] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:18.469 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.469 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:18.469 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.469 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.469 [2024-12-07 16:36:17.346177] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.470 [2024-12-07 16:36:17.348527] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.470 [2024-12-07 16:36:17.348603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.470 [2024-12-07 16:36:17.348632] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:18.470 [2024-12-07 16:36:17.348654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:18.470 [2024-12-07 16:36:17.348671] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:18.470 [2024-12-07 16:36:17.348691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.470 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.729 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.729 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.729 "name": "Existed_Raid", 00:10:18.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.729 "strip_size_kb": 64, 00:10:18.729 "state": "configuring", 00:10:18.729 "raid_level": "concat", 00:10:18.729 "superblock": false, 00:10:18.729 "num_base_bdevs": 4, 00:10:18.729 
"num_base_bdevs_discovered": 1, 00:10:18.729 "num_base_bdevs_operational": 4, 00:10:18.729 "base_bdevs_list": [ 00:10:18.729 { 00:10:18.729 "name": "BaseBdev1", 00:10:18.729 "uuid": "e0af7e1d-1256-468d-afe9-b48ad620f3b5", 00:10:18.729 "is_configured": true, 00:10:18.729 "data_offset": 0, 00:10:18.729 "data_size": 65536 00:10:18.729 }, 00:10:18.729 { 00:10:18.729 "name": "BaseBdev2", 00:10:18.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.729 "is_configured": false, 00:10:18.729 "data_offset": 0, 00:10:18.729 "data_size": 0 00:10:18.729 }, 00:10:18.729 { 00:10:18.729 "name": "BaseBdev3", 00:10:18.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.729 "is_configured": false, 00:10:18.729 "data_offset": 0, 00:10:18.729 "data_size": 0 00:10:18.729 }, 00:10:18.729 { 00:10:18.729 "name": "BaseBdev4", 00:10:18.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.729 "is_configured": false, 00:10:18.729 "data_offset": 0, 00:10:18.729 "data_size": 0 00:10:18.729 } 00:10:18.729 ] 00:10:18.729 }' 00:10:18.729 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.729 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.990 [2024-12-07 16:36:17.839318] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.990 BaseBdev2 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:18.990 16:36:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.990 [ 00:10:18.990 { 00:10:18.990 "name": "BaseBdev2", 00:10:18.990 "aliases": [ 00:10:18.990 "cd25fabe-e9fb-4a94-bee2-6e099c352258" 00:10:18.990 ], 00:10:18.990 "product_name": "Malloc disk", 00:10:18.990 "block_size": 512, 00:10:18.990 "num_blocks": 65536, 00:10:18.990 "uuid": "cd25fabe-e9fb-4a94-bee2-6e099c352258", 00:10:18.990 "assigned_rate_limits": { 00:10:18.990 "rw_ios_per_sec": 0, 00:10:18.990 "rw_mbytes_per_sec": 0, 00:10:18.990 "r_mbytes_per_sec": 0, 00:10:18.990 "w_mbytes_per_sec": 0 00:10:18.990 }, 00:10:18.990 "claimed": true, 00:10:18.990 "claim_type": "exclusive_write", 00:10:18.990 "zoned": false, 00:10:18.990 "supported_io_types": { 
00:10:18.990 "read": true, 00:10:18.990 "write": true, 00:10:18.990 "unmap": true, 00:10:18.990 "flush": true, 00:10:18.990 "reset": true, 00:10:18.990 "nvme_admin": false, 00:10:18.990 "nvme_io": false, 00:10:18.990 "nvme_io_md": false, 00:10:18.990 "write_zeroes": true, 00:10:18.990 "zcopy": true, 00:10:18.990 "get_zone_info": false, 00:10:18.990 "zone_management": false, 00:10:18.990 "zone_append": false, 00:10:18.990 "compare": false, 00:10:18.990 "compare_and_write": false, 00:10:18.990 "abort": true, 00:10:18.990 "seek_hole": false, 00:10:18.990 "seek_data": false, 00:10:18.990 "copy": true, 00:10:18.990 "nvme_iov_md": false 00:10:18.990 }, 00:10:18.990 "memory_domains": [ 00:10:18.990 { 00:10:18.990 "dma_device_id": "system", 00:10:18.990 "dma_device_type": 1 00:10:18.990 }, 00:10:18.990 { 00:10:18.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.990 "dma_device_type": 2 00:10:18.990 } 00:10:18.990 ], 00:10:18.990 "driver_specific": {} 00:10:18.990 } 00:10:18.990 ] 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.990 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.991 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.991 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.991 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.991 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.991 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.991 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.991 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.251 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.251 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.251 "name": "Existed_Raid", 00:10:19.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.251 "strip_size_kb": 64, 00:10:19.251 "state": "configuring", 00:10:19.251 "raid_level": "concat", 00:10:19.251 "superblock": false, 00:10:19.251 "num_base_bdevs": 4, 00:10:19.251 "num_base_bdevs_discovered": 2, 00:10:19.251 "num_base_bdevs_operational": 4, 00:10:19.251 "base_bdevs_list": [ 00:10:19.251 { 00:10:19.251 "name": "BaseBdev1", 00:10:19.251 "uuid": "e0af7e1d-1256-468d-afe9-b48ad620f3b5", 00:10:19.251 "is_configured": true, 00:10:19.251 "data_offset": 0, 00:10:19.251 "data_size": 65536 00:10:19.251 }, 00:10:19.251 { 00:10:19.251 "name": "BaseBdev2", 00:10:19.251 "uuid": "cd25fabe-e9fb-4a94-bee2-6e099c352258", 00:10:19.251 
"is_configured": true, 00:10:19.251 "data_offset": 0, 00:10:19.251 "data_size": 65536 00:10:19.251 }, 00:10:19.251 { 00:10:19.251 "name": "BaseBdev3", 00:10:19.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.251 "is_configured": false, 00:10:19.251 "data_offset": 0, 00:10:19.251 "data_size": 0 00:10:19.251 }, 00:10:19.251 { 00:10:19.251 "name": "BaseBdev4", 00:10:19.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.251 "is_configured": false, 00:10:19.251 "data_offset": 0, 00:10:19.251 "data_size": 0 00:10:19.251 } 00:10:19.251 ] 00:10:19.251 }' 00:10:19.251 16:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.251 16:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.510 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:19.510 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.510 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.510 [2024-12-07 16:36:18.315874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.510 BaseBdev3 00:10:19.510 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.510 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:19.510 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.511 [ 00:10:19.511 { 00:10:19.511 "name": "BaseBdev3", 00:10:19.511 "aliases": [ 00:10:19.511 "b289cf03-06be-4c09-8881-d577aa41162b" 00:10:19.511 ], 00:10:19.511 "product_name": "Malloc disk", 00:10:19.511 "block_size": 512, 00:10:19.511 "num_blocks": 65536, 00:10:19.511 "uuid": "b289cf03-06be-4c09-8881-d577aa41162b", 00:10:19.511 "assigned_rate_limits": { 00:10:19.511 "rw_ios_per_sec": 0, 00:10:19.511 "rw_mbytes_per_sec": 0, 00:10:19.511 "r_mbytes_per_sec": 0, 00:10:19.511 "w_mbytes_per_sec": 0 00:10:19.511 }, 00:10:19.511 "claimed": true, 00:10:19.511 "claim_type": "exclusive_write", 00:10:19.511 "zoned": false, 00:10:19.511 "supported_io_types": { 00:10:19.511 "read": true, 00:10:19.511 "write": true, 00:10:19.511 "unmap": true, 00:10:19.511 "flush": true, 00:10:19.511 "reset": true, 00:10:19.511 "nvme_admin": false, 00:10:19.511 "nvme_io": false, 00:10:19.511 "nvme_io_md": false, 00:10:19.511 "write_zeroes": true, 00:10:19.511 "zcopy": true, 00:10:19.511 "get_zone_info": false, 00:10:19.511 "zone_management": false, 00:10:19.511 "zone_append": false, 00:10:19.511 "compare": false, 00:10:19.511 "compare_and_write": false, 
00:10:19.511 "abort": true, 00:10:19.511 "seek_hole": false, 00:10:19.511 "seek_data": false, 00:10:19.511 "copy": true, 00:10:19.511 "nvme_iov_md": false 00:10:19.511 }, 00:10:19.511 "memory_domains": [ 00:10:19.511 { 00:10:19.511 "dma_device_id": "system", 00:10:19.511 "dma_device_type": 1 00:10:19.511 }, 00:10:19.511 { 00:10:19.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.511 "dma_device_type": 2 00:10:19.511 } 00:10:19.511 ], 00:10:19.511 "driver_specific": {} 00:10:19.511 } 00:10:19.511 ] 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.511 "name": "Existed_Raid", 00:10:19.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.511 "strip_size_kb": 64, 00:10:19.511 "state": "configuring", 00:10:19.511 "raid_level": "concat", 00:10:19.511 "superblock": false, 00:10:19.511 "num_base_bdevs": 4, 00:10:19.511 "num_base_bdevs_discovered": 3, 00:10:19.511 "num_base_bdevs_operational": 4, 00:10:19.511 "base_bdevs_list": [ 00:10:19.511 { 00:10:19.511 "name": "BaseBdev1", 00:10:19.511 "uuid": "e0af7e1d-1256-468d-afe9-b48ad620f3b5", 00:10:19.511 "is_configured": true, 00:10:19.511 "data_offset": 0, 00:10:19.511 "data_size": 65536 00:10:19.511 }, 00:10:19.511 { 00:10:19.511 "name": "BaseBdev2", 00:10:19.511 "uuid": "cd25fabe-e9fb-4a94-bee2-6e099c352258", 00:10:19.511 "is_configured": true, 00:10:19.511 "data_offset": 0, 00:10:19.511 "data_size": 65536 00:10:19.511 }, 00:10:19.511 { 00:10:19.511 "name": "BaseBdev3", 00:10:19.511 "uuid": "b289cf03-06be-4c09-8881-d577aa41162b", 00:10:19.511 "is_configured": true, 00:10:19.511 "data_offset": 0, 00:10:19.511 "data_size": 65536 00:10:19.511 }, 00:10:19.511 { 00:10:19.511 "name": "BaseBdev4", 00:10:19.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.511 "is_configured": false, 
00:10:19.511 "data_offset": 0, 00:10:19.511 "data_size": 0 00:10:19.511 } 00:10:19.511 ] 00:10:19.511 }' 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.511 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.079 [2024-12-07 16:36:18.836344] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:20.079 [2024-12-07 16:36:18.836509] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:20.079 [2024-12-07 16:36:18.836536] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:20.079 [2024-12-07 16:36:18.836904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:20.079 [2024-12-07 16:36:18.837122] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:20.079 [2024-12-07 16:36:18.837166] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:20.079 [2024-12-07 16:36:18.837464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.079 BaseBdev4 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.079 [ 00:10:20.079 { 00:10:20.079 "name": "BaseBdev4", 00:10:20.079 "aliases": [ 00:10:20.079 "7ed25e7d-5637-40d6-8598-139d62c9140e" 00:10:20.079 ], 00:10:20.079 "product_name": "Malloc disk", 00:10:20.079 "block_size": 512, 00:10:20.079 "num_blocks": 65536, 00:10:20.079 "uuid": "7ed25e7d-5637-40d6-8598-139d62c9140e", 00:10:20.079 "assigned_rate_limits": { 00:10:20.079 "rw_ios_per_sec": 0, 00:10:20.079 "rw_mbytes_per_sec": 0, 00:10:20.079 "r_mbytes_per_sec": 0, 00:10:20.079 "w_mbytes_per_sec": 0 00:10:20.079 }, 00:10:20.079 "claimed": true, 00:10:20.079 "claim_type": "exclusive_write", 00:10:20.079 "zoned": false, 00:10:20.079 "supported_io_types": { 00:10:20.079 "read": true, 00:10:20.079 "write": true, 00:10:20.079 "unmap": true, 00:10:20.079 "flush": true, 00:10:20.079 "reset": true, 00:10:20.079 
"nvme_admin": false, 00:10:20.079 "nvme_io": false, 00:10:20.079 "nvme_io_md": false, 00:10:20.079 "write_zeroes": true, 00:10:20.079 "zcopy": true, 00:10:20.079 "get_zone_info": false, 00:10:20.079 "zone_management": false, 00:10:20.079 "zone_append": false, 00:10:20.079 "compare": false, 00:10:20.079 "compare_and_write": false, 00:10:20.079 "abort": true, 00:10:20.079 "seek_hole": false, 00:10:20.079 "seek_data": false, 00:10:20.079 "copy": true, 00:10:20.079 "nvme_iov_md": false 00:10:20.079 }, 00:10:20.079 "memory_domains": [ 00:10:20.079 { 00:10:20.079 "dma_device_id": "system", 00:10:20.079 "dma_device_type": 1 00:10:20.079 }, 00:10:20.079 { 00:10:20.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.079 "dma_device_type": 2 00:10:20.079 } 00:10:20.079 ], 00:10:20.079 "driver_specific": {} 00:10:20.079 } 00:10:20.079 ] 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.079 
16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.079 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.080 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.080 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.080 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.080 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.080 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.080 "name": "Existed_Raid", 00:10:20.080 "uuid": "45449a1e-e0f5-458a-ace5-6d5d504e207e", 00:10:20.080 "strip_size_kb": 64, 00:10:20.080 "state": "online", 00:10:20.080 "raid_level": "concat", 00:10:20.080 "superblock": false, 00:10:20.080 "num_base_bdevs": 4, 00:10:20.080 "num_base_bdevs_discovered": 4, 00:10:20.080 "num_base_bdevs_operational": 4, 00:10:20.080 "base_bdevs_list": [ 00:10:20.080 { 00:10:20.080 "name": "BaseBdev1", 00:10:20.080 "uuid": "e0af7e1d-1256-468d-afe9-b48ad620f3b5", 00:10:20.080 "is_configured": true, 00:10:20.080 "data_offset": 0, 00:10:20.080 "data_size": 65536 00:10:20.080 }, 00:10:20.080 { 00:10:20.080 "name": "BaseBdev2", 00:10:20.080 "uuid": "cd25fabe-e9fb-4a94-bee2-6e099c352258", 00:10:20.080 "is_configured": true, 00:10:20.080 "data_offset": 0, 00:10:20.080 "data_size": 65536 00:10:20.080 }, 00:10:20.080 { 00:10:20.080 "name": "BaseBdev3", 
00:10:20.080 "uuid": "b289cf03-06be-4c09-8881-d577aa41162b", 00:10:20.080 "is_configured": true, 00:10:20.080 "data_offset": 0, 00:10:20.080 "data_size": 65536 00:10:20.080 }, 00:10:20.080 { 00:10:20.080 "name": "BaseBdev4", 00:10:20.080 "uuid": "7ed25e7d-5637-40d6-8598-139d62c9140e", 00:10:20.080 "is_configured": true, 00:10:20.080 "data_offset": 0, 00:10:20.080 "data_size": 65536 00:10:20.080 } 00:10:20.080 ] 00:10:20.080 }' 00:10:20.080 16:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.080 16:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.649 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:20.649 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:20.649 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.649 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.649 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.649 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.649 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:20.649 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.649 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.649 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.649 [2024-12-07 16:36:19.319930] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.649 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.649 
16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.649 "name": "Existed_Raid", 00:10:20.649 "aliases": [ 00:10:20.649 "45449a1e-e0f5-458a-ace5-6d5d504e207e" 00:10:20.649 ], 00:10:20.649 "product_name": "Raid Volume", 00:10:20.649 "block_size": 512, 00:10:20.649 "num_blocks": 262144, 00:10:20.649 "uuid": "45449a1e-e0f5-458a-ace5-6d5d504e207e", 00:10:20.649 "assigned_rate_limits": { 00:10:20.649 "rw_ios_per_sec": 0, 00:10:20.649 "rw_mbytes_per_sec": 0, 00:10:20.649 "r_mbytes_per_sec": 0, 00:10:20.649 "w_mbytes_per_sec": 0 00:10:20.649 }, 00:10:20.649 "claimed": false, 00:10:20.649 "zoned": false, 00:10:20.649 "supported_io_types": { 00:10:20.649 "read": true, 00:10:20.649 "write": true, 00:10:20.649 "unmap": true, 00:10:20.649 "flush": true, 00:10:20.649 "reset": true, 00:10:20.649 "nvme_admin": false, 00:10:20.649 "nvme_io": false, 00:10:20.649 "nvme_io_md": false, 00:10:20.649 "write_zeroes": true, 00:10:20.649 "zcopy": false, 00:10:20.649 "get_zone_info": false, 00:10:20.649 "zone_management": false, 00:10:20.649 "zone_append": false, 00:10:20.649 "compare": false, 00:10:20.649 "compare_and_write": false, 00:10:20.649 "abort": false, 00:10:20.650 "seek_hole": false, 00:10:20.650 "seek_data": false, 00:10:20.650 "copy": false, 00:10:20.650 "nvme_iov_md": false 00:10:20.650 }, 00:10:20.650 "memory_domains": [ 00:10:20.650 { 00:10:20.650 "dma_device_id": "system", 00:10:20.650 "dma_device_type": 1 00:10:20.650 }, 00:10:20.650 { 00:10:20.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.650 "dma_device_type": 2 00:10:20.650 }, 00:10:20.650 { 00:10:20.650 "dma_device_id": "system", 00:10:20.650 "dma_device_type": 1 00:10:20.650 }, 00:10:20.650 { 00:10:20.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.650 "dma_device_type": 2 00:10:20.650 }, 00:10:20.650 { 00:10:20.650 "dma_device_id": "system", 00:10:20.650 "dma_device_type": 1 00:10:20.650 }, 00:10:20.650 { 00:10:20.650 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:20.650 "dma_device_type": 2 00:10:20.650 }, 00:10:20.650 { 00:10:20.650 "dma_device_id": "system", 00:10:20.650 "dma_device_type": 1 00:10:20.650 }, 00:10:20.650 { 00:10:20.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.650 "dma_device_type": 2 00:10:20.650 } 00:10:20.650 ], 00:10:20.650 "driver_specific": { 00:10:20.650 "raid": { 00:10:20.650 "uuid": "45449a1e-e0f5-458a-ace5-6d5d504e207e", 00:10:20.650 "strip_size_kb": 64, 00:10:20.650 "state": "online", 00:10:20.650 "raid_level": "concat", 00:10:20.650 "superblock": false, 00:10:20.650 "num_base_bdevs": 4, 00:10:20.650 "num_base_bdevs_discovered": 4, 00:10:20.650 "num_base_bdevs_operational": 4, 00:10:20.650 "base_bdevs_list": [ 00:10:20.650 { 00:10:20.650 "name": "BaseBdev1", 00:10:20.650 "uuid": "e0af7e1d-1256-468d-afe9-b48ad620f3b5", 00:10:20.650 "is_configured": true, 00:10:20.650 "data_offset": 0, 00:10:20.650 "data_size": 65536 00:10:20.650 }, 00:10:20.650 { 00:10:20.650 "name": "BaseBdev2", 00:10:20.650 "uuid": "cd25fabe-e9fb-4a94-bee2-6e099c352258", 00:10:20.650 "is_configured": true, 00:10:20.650 "data_offset": 0, 00:10:20.650 "data_size": 65536 00:10:20.650 }, 00:10:20.650 { 00:10:20.650 "name": "BaseBdev3", 00:10:20.650 "uuid": "b289cf03-06be-4c09-8881-d577aa41162b", 00:10:20.650 "is_configured": true, 00:10:20.650 "data_offset": 0, 00:10:20.650 "data_size": 65536 00:10:20.650 }, 00:10:20.650 { 00:10:20.650 "name": "BaseBdev4", 00:10:20.650 "uuid": "7ed25e7d-5637-40d6-8598-139d62c9140e", 00:10:20.650 "is_configured": true, 00:10:20.650 "data_offset": 0, 00:10:20.650 "data_size": 65536 00:10:20.650 } 00:10:20.650 ] 00:10:20.650 } 00:10:20.650 } 00:10:20.650 }' 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:20.650 BaseBdev2 
00:10:20.650 BaseBdev3 00:10:20.650 BaseBdev4' 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.650 16:36:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.650 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.923 16:36:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.923 [2024-12-07 16:36:19.651109] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:20.923 [2024-12-07 16:36:19.651215] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.923 [2024-12-07 16:36:19.651285] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.923 "name": "Existed_Raid", 00:10:20.923 "uuid": "45449a1e-e0f5-458a-ace5-6d5d504e207e", 00:10:20.923 "strip_size_kb": 64, 00:10:20.923 "state": "offline", 00:10:20.923 "raid_level": "concat", 00:10:20.923 "superblock": false, 00:10:20.923 "num_base_bdevs": 4, 00:10:20.923 "num_base_bdevs_discovered": 3, 00:10:20.923 "num_base_bdevs_operational": 3, 00:10:20.923 "base_bdevs_list": [ 00:10:20.923 { 00:10:20.923 "name": null, 00:10:20.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.923 "is_configured": false, 00:10:20.923 "data_offset": 0, 00:10:20.923 "data_size": 65536 00:10:20.923 }, 00:10:20.923 { 00:10:20.923 "name": "BaseBdev2", 00:10:20.923 "uuid": "cd25fabe-e9fb-4a94-bee2-6e099c352258", 00:10:20.923 "is_configured": 
true, 00:10:20.923 "data_offset": 0, 00:10:20.923 "data_size": 65536 00:10:20.923 }, 00:10:20.923 { 00:10:20.923 "name": "BaseBdev3", 00:10:20.923 "uuid": "b289cf03-06be-4c09-8881-d577aa41162b", 00:10:20.923 "is_configured": true, 00:10:20.923 "data_offset": 0, 00:10:20.923 "data_size": 65536 00:10:20.923 }, 00:10:20.923 { 00:10:20.923 "name": "BaseBdev4", 00:10:20.923 "uuid": "7ed25e7d-5637-40d6-8598-139d62c9140e", 00:10:20.923 "is_configured": true, 00:10:20.923 "data_offset": 0, 00:10:20.923 "data_size": 65536 00:10:20.923 } 00:10:20.923 ] 00:10:20.923 }' 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.923 16:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.511 [2024-12-07 16:36:20.151801] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.511 [2024-12-07 16:36:20.232109] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:21.511 16:36:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.511 [2024-12-07 16:36:20.311792] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:21.511 [2024-12-07 16:36:20.311906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.511 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.812 BaseBdev2 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.812 [ 00:10:21.812 { 00:10:21.812 "name": "BaseBdev2", 00:10:21.812 "aliases": [ 00:10:21.812 "d6695576-a727-44fd-9911-18d9d98cacee" 00:10:21.812 ], 00:10:21.812 "product_name": "Malloc disk", 00:10:21.812 "block_size": 512, 00:10:21.812 "num_blocks": 65536, 00:10:21.812 "uuid": "d6695576-a727-44fd-9911-18d9d98cacee", 00:10:21.812 "assigned_rate_limits": { 00:10:21.812 "rw_ios_per_sec": 0, 00:10:21.812 "rw_mbytes_per_sec": 0, 00:10:21.812 "r_mbytes_per_sec": 0, 00:10:21.812 "w_mbytes_per_sec": 0 00:10:21.812 }, 00:10:21.812 "claimed": false, 00:10:21.812 "zoned": false, 00:10:21.812 "supported_io_types": { 00:10:21.812 "read": true, 00:10:21.812 "write": true, 00:10:21.812 "unmap": true, 00:10:21.812 "flush": true, 00:10:21.812 "reset": true, 00:10:21.812 "nvme_admin": false, 00:10:21.812 "nvme_io": false, 00:10:21.812 "nvme_io_md": false, 00:10:21.812 "write_zeroes": true, 00:10:21.812 "zcopy": true, 00:10:21.812 "get_zone_info": false, 00:10:21.812 "zone_management": false, 00:10:21.812 "zone_append": false, 00:10:21.812 "compare": false, 00:10:21.812 "compare_and_write": false, 00:10:21.812 "abort": true, 00:10:21.812 "seek_hole": false, 00:10:21.812 
"seek_data": false, 00:10:21.812 "copy": true, 00:10:21.812 "nvme_iov_md": false 00:10:21.812 }, 00:10:21.812 "memory_domains": [ 00:10:21.812 { 00:10:21.812 "dma_device_id": "system", 00:10:21.812 "dma_device_type": 1 00:10:21.812 }, 00:10:21.812 { 00:10:21.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.812 "dma_device_type": 2 00:10:21.812 } 00:10:21.812 ], 00:10:21.812 "driver_specific": {} 00:10:21.812 } 00:10:21.812 ] 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.812 BaseBdev3 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.812 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.812 [ 00:10:21.812 { 00:10:21.812 "name": "BaseBdev3", 00:10:21.812 "aliases": [ 00:10:21.812 "02485068-aadd-45c0-869f-f19b79dbeab2" 00:10:21.812 ], 00:10:21.812 "product_name": "Malloc disk", 00:10:21.812 "block_size": 512, 00:10:21.812 "num_blocks": 65536, 00:10:21.812 "uuid": "02485068-aadd-45c0-869f-f19b79dbeab2", 00:10:21.812 "assigned_rate_limits": { 00:10:21.812 "rw_ios_per_sec": 0, 00:10:21.812 "rw_mbytes_per_sec": 0, 00:10:21.812 "r_mbytes_per_sec": 0, 00:10:21.812 "w_mbytes_per_sec": 0 00:10:21.812 }, 00:10:21.812 "claimed": false, 00:10:21.812 "zoned": false, 00:10:21.812 "supported_io_types": { 00:10:21.812 "read": true, 00:10:21.812 "write": true, 00:10:21.812 "unmap": true, 00:10:21.812 "flush": true, 00:10:21.812 "reset": true, 00:10:21.812 "nvme_admin": false, 00:10:21.812 "nvme_io": false, 00:10:21.812 "nvme_io_md": false, 00:10:21.812 "write_zeroes": true, 00:10:21.812 "zcopy": true, 00:10:21.812 "get_zone_info": false, 00:10:21.812 "zone_management": false, 00:10:21.812 "zone_append": false, 00:10:21.812 "compare": false, 00:10:21.812 "compare_and_write": false, 00:10:21.812 "abort": true, 00:10:21.812 "seek_hole": false, 00:10:21.812 "seek_data": false, 
00:10:21.812 "copy": true, 00:10:21.812 "nvme_iov_md": false 00:10:21.812 }, 00:10:21.812 "memory_domains": [ 00:10:21.812 { 00:10:21.812 "dma_device_id": "system", 00:10:21.812 "dma_device_type": 1 00:10:21.812 }, 00:10:21.812 { 00:10:21.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.812 "dma_device_type": 2 00:10:21.812 } 00:10:21.812 ], 00:10:21.812 "driver_specific": {} 00:10:21.812 } 00:10:21.812 ] 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.813 BaseBdev4 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:21.813 
16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.813 [ 00:10:21.813 { 00:10:21.813 "name": "BaseBdev4", 00:10:21.813 "aliases": [ 00:10:21.813 "0d8d6731-0f89-48ca-a1b7-9142d6361c33" 00:10:21.813 ], 00:10:21.813 "product_name": "Malloc disk", 00:10:21.813 "block_size": 512, 00:10:21.813 "num_blocks": 65536, 00:10:21.813 "uuid": "0d8d6731-0f89-48ca-a1b7-9142d6361c33", 00:10:21.813 "assigned_rate_limits": { 00:10:21.813 "rw_ios_per_sec": 0, 00:10:21.813 "rw_mbytes_per_sec": 0, 00:10:21.813 "r_mbytes_per_sec": 0, 00:10:21.813 "w_mbytes_per_sec": 0 00:10:21.813 }, 00:10:21.813 "claimed": false, 00:10:21.813 "zoned": false, 00:10:21.813 "supported_io_types": { 00:10:21.813 "read": true, 00:10:21.813 "write": true, 00:10:21.813 "unmap": true, 00:10:21.813 "flush": true, 00:10:21.813 "reset": true, 00:10:21.813 "nvme_admin": false, 00:10:21.813 "nvme_io": false, 00:10:21.813 "nvme_io_md": false, 00:10:21.813 "write_zeroes": true, 00:10:21.813 "zcopy": true, 00:10:21.813 "get_zone_info": false, 00:10:21.813 "zone_management": false, 00:10:21.813 "zone_append": false, 00:10:21.813 "compare": false, 00:10:21.813 "compare_and_write": false, 00:10:21.813 "abort": true, 00:10:21.813 "seek_hole": false, 00:10:21.813 "seek_data": false, 00:10:21.813 
"copy": true, 00:10:21.813 "nvme_iov_md": false 00:10:21.813 }, 00:10:21.813 "memory_domains": [ 00:10:21.813 { 00:10:21.813 "dma_device_id": "system", 00:10:21.813 "dma_device_type": 1 00:10:21.813 }, 00:10:21.813 { 00:10:21.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.813 "dma_device_type": 2 00:10:21.813 } 00:10:21.813 ], 00:10:21.813 "driver_specific": {} 00:10:21.813 } 00:10:21.813 ] 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.813 [2024-12-07 16:36:20.580819] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:21.813 [2024-12-07 16:36:20.580923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:21.813 [2024-12-07 16:36:20.580982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.813 [2024-12-07 16:36:20.583360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.813 [2024-12-07 16:36:20.583460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.813 16:36:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.813 "name": "Existed_Raid", 00:10:21.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.813 "strip_size_kb": 64, 00:10:21.813 "state": "configuring", 00:10:21.813 
"raid_level": "concat", 00:10:21.813 "superblock": false, 00:10:21.813 "num_base_bdevs": 4, 00:10:21.813 "num_base_bdevs_discovered": 3, 00:10:21.813 "num_base_bdevs_operational": 4, 00:10:21.813 "base_bdevs_list": [ 00:10:21.813 { 00:10:21.813 "name": "BaseBdev1", 00:10:21.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.813 "is_configured": false, 00:10:21.813 "data_offset": 0, 00:10:21.813 "data_size": 0 00:10:21.813 }, 00:10:21.813 { 00:10:21.813 "name": "BaseBdev2", 00:10:21.813 "uuid": "d6695576-a727-44fd-9911-18d9d98cacee", 00:10:21.813 "is_configured": true, 00:10:21.813 "data_offset": 0, 00:10:21.813 "data_size": 65536 00:10:21.813 }, 00:10:21.813 { 00:10:21.813 "name": "BaseBdev3", 00:10:21.813 "uuid": "02485068-aadd-45c0-869f-f19b79dbeab2", 00:10:21.813 "is_configured": true, 00:10:21.813 "data_offset": 0, 00:10:21.813 "data_size": 65536 00:10:21.813 }, 00:10:21.813 { 00:10:21.813 "name": "BaseBdev4", 00:10:21.813 "uuid": "0d8d6731-0f89-48ca-a1b7-9142d6361c33", 00:10:21.813 "is_configured": true, 00:10:21.813 "data_offset": 0, 00:10:21.813 "data_size": 65536 00:10:21.813 } 00:10:21.813 ] 00:10:21.813 }' 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.813 16:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.384 [2024-12-07 16:36:21.032015] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.384 "name": "Existed_Raid", 00:10:22.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.384 "strip_size_kb": 64, 00:10:22.384 "state": "configuring", 00:10:22.384 "raid_level": "concat", 00:10:22.384 "superblock": false, 
00:10:22.384 "num_base_bdevs": 4, 00:10:22.384 "num_base_bdevs_discovered": 2, 00:10:22.384 "num_base_bdevs_operational": 4, 00:10:22.384 "base_bdevs_list": [ 00:10:22.384 { 00:10:22.384 "name": "BaseBdev1", 00:10:22.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.384 "is_configured": false, 00:10:22.384 "data_offset": 0, 00:10:22.384 "data_size": 0 00:10:22.384 }, 00:10:22.384 { 00:10:22.384 "name": null, 00:10:22.384 "uuid": "d6695576-a727-44fd-9911-18d9d98cacee", 00:10:22.384 "is_configured": false, 00:10:22.384 "data_offset": 0, 00:10:22.384 "data_size": 65536 00:10:22.384 }, 00:10:22.384 { 00:10:22.384 "name": "BaseBdev3", 00:10:22.384 "uuid": "02485068-aadd-45c0-869f-f19b79dbeab2", 00:10:22.384 "is_configured": true, 00:10:22.384 "data_offset": 0, 00:10:22.384 "data_size": 65536 00:10:22.384 }, 00:10:22.384 { 00:10:22.384 "name": "BaseBdev4", 00:10:22.384 "uuid": "0d8d6731-0f89-48ca-a1b7-9142d6361c33", 00:10:22.384 "is_configured": true, 00:10:22.384 "data_offset": 0, 00:10:22.384 "data_size": 65536 00:10:22.384 } 00:10:22.384 ] 00:10:22.384 }' 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.384 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:22.645 16:36:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.645 [2024-12-07 16:36:21.472792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.645 BaseBdev1 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.645 [ 00:10:22.645 { 00:10:22.645 "name": "BaseBdev1", 00:10:22.645 "aliases": [ 00:10:22.645 "1b22cbfb-f7ed-4183-97bd-46a0ba150dbf" 00:10:22.645 ], 00:10:22.645 "product_name": "Malloc disk", 00:10:22.645 "block_size": 512, 00:10:22.645 "num_blocks": 65536, 00:10:22.645 "uuid": "1b22cbfb-f7ed-4183-97bd-46a0ba150dbf", 00:10:22.645 "assigned_rate_limits": { 00:10:22.645 "rw_ios_per_sec": 0, 00:10:22.645 "rw_mbytes_per_sec": 0, 00:10:22.645 "r_mbytes_per_sec": 0, 00:10:22.645 "w_mbytes_per_sec": 0 00:10:22.645 }, 00:10:22.645 "claimed": true, 00:10:22.645 "claim_type": "exclusive_write", 00:10:22.645 "zoned": false, 00:10:22.645 "supported_io_types": { 00:10:22.645 "read": true, 00:10:22.645 "write": true, 00:10:22.645 "unmap": true, 00:10:22.645 "flush": true, 00:10:22.645 "reset": true, 00:10:22.645 "nvme_admin": false, 00:10:22.645 "nvme_io": false, 00:10:22.645 "nvme_io_md": false, 00:10:22.645 "write_zeroes": true, 00:10:22.645 "zcopy": true, 00:10:22.645 "get_zone_info": false, 00:10:22.645 "zone_management": false, 00:10:22.645 "zone_append": false, 00:10:22.645 "compare": false, 00:10:22.645 "compare_and_write": false, 00:10:22.645 "abort": true, 00:10:22.645 "seek_hole": false, 00:10:22.645 "seek_data": false, 00:10:22.645 "copy": true, 00:10:22.645 "nvme_iov_md": false 00:10:22.645 }, 00:10:22.645 "memory_domains": [ 00:10:22.645 { 00:10:22.645 "dma_device_id": "system", 00:10:22.645 "dma_device_type": 1 00:10:22.645 }, 00:10:22.645 { 00:10:22.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.645 "dma_device_type": 2 00:10:22.645 } 00:10:22.645 ], 00:10:22.645 "driver_specific": {} 00:10:22.645 } 00:10:22.645 ] 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.645 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.905 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.905 "name": "Existed_Raid", 00:10:22.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.905 "strip_size_kb": 64, 00:10:22.905 "state": "configuring", 00:10:22.905 "raid_level": "concat", 00:10:22.905 "superblock": false, 
00:10:22.905 "num_base_bdevs": 4, 00:10:22.905 "num_base_bdevs_discovered": 3, 00:10:22.905 "num_base_bdevs_operational": 4, 00:10:22.905 "base_bdevs_list": [ 00:10:22.905 { 00:10:22.905 "name": "BaseBdev1", 00:10:22.905 "uuid": "1b22cbfb-f7ed-4183-97bd-46a0ba150dbf", 00:10:22.905 "is_configured": true, 00:10:22.905 "data_offset": 0, 00:10:22.905 "data_size": 65536 00:10:22.905 }, 00:10:22.905 { 00:10:22.905 "name": null, 00:10:22.905 "uuid": "d6695576-a727-44fd-9911-18d9d98cacee", 00:10:22.905 "is_configured": false, 00:10:22.905 "data_offset": 0, 00:10:22.905 "data_size": 65536 00:10:22.905 }, 00:10:22.905 { 00:10:22.905 "name": "BaseBdev3", 00:10:22.905 "uuid": "02485068-aadd-45c0-869f-f19b79dbeab2", 00:10:22.905 "is_configured": true, 00:10:22.905 "data_offset": 0, 00:10:22.905 "data_size": 65536 00:10:22.905 }, 00:10:22.905 { 00:10:22.905 "name": "BaseBdev4", 00:10:22.905 "uuid": "0d8d6731-0f89-48ca-a1b7-9142d6361c33", 00:10:22.905 "is_configured": true, 00:10:22.905 "data_offset": 0, 00:10:22.905 "data_size": 65536 00:10:22.905 } 00:10:22.905 ] 00:10:22.905 }' 00:10:22.905 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.905 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.166 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.166 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:23.166 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.166 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.166 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.166 16:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:23.166 16:36:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:23.166 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.166 16:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.166 [2024-12-07 16:36:21.999960] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.166 16:36:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.166 "name": "Existed_Raid", 00:10:23.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.166 "strip_size_kb": 64, 00:10:23.166 "state": "configuring", 00:10:23.166 "raid_level": "concat", 00:10:23.166 "superblock": false, 00:10:23.166 "num_base_bdevs": 4, 00:10:23.166 "num_base_bdevs_discovered": 2, 00:10:23.166 "num_base_bdevs_operational": 4, 00:10:23.166 "base_bdevs_list": [ 00:10:23.166 { 00:10:23.166 "name": "BaseBdev1", 00:10:23.166 "uuid": "1b22cbfb-f7ed-4183-97bd-46a0ba150dbf", 00:10:23.166 "is_configured": true, 00:10:23.166 "data_offset": 0, 00:10:23.166 "data_size": 65536 00:10:23.166 }, 00:10:23.166 { 00:10:23.166 "name": null, 00:10:23.166 "uuid": "d6695576-a727-44fd-9911-18d9d98cacee", 00:10:23.166 "is_configured": false, 00:10:23.166 "data_offset": 0, 00:10:23.166 "data_size": 65536 00:10:23.166 }, 00:10:23.166 { 00:10:23.166 "name": null, 00:10:23.166 "uuid": "02485068-aadd-45c0-869f-f19b79dbeab2", 00:10:23.166 "is_configured": false, 00:10:23.166 "data_offset": 0, 00:10:23.166 "data_size": 65536 00:10:23.166 }, 00:10:23.166 { 00:10:23.166 "name": "BaseBdev4", 00:10:23.166 "uuid": "0d8d6731-0f89-48ca-a1b7-9142d6361c33", 00:10:23.166 "is_configured": true, 00:10:23.166 "data_offset": 0, 00:10:23.166 "data_size": 65536 00:10:23.166 } 00:10:23.166 ] 00:10:23.166 }' 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.166 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.736 [2024-12-07 16:36:22.451293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.736 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.736 "name": "Existed_Raid", 00:10:23.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.736 "strip_size_kb": 64, 00:10:23.736 "state": "configuring", 00:10:23.736 "raid_level": "concat", 00:10:23.736 "superblock": false, 00:10:23.736 "num_base_bdevs": 4, 00:10:23.736 "num_base_bdevs_discovered": 3, 00:10:23.736 "num_base_bdevs_operational": 4, 00:10:23.736 "base_bdevs_list": [ 00:10:23.736 { 00:10:23.737 "name": "BaseBdev1", 00:10:23.737 "uuid": "1b22cbfb-f7ed-4183-97bd-46a0ba150dbf", 00:10:23.737 "is_configured": true, 00:10:23.737 "data_offset": 0, 00:10:23.737 "data_size": 65536 00:10:23.737 }, 00:10:23.737 { 00:10:23.737 "name": null, 00:10:23.737 "uuid": "d6695576-a727-44fd-9911-18d9d98cacee", 00:10:23.737 "is_configured": false, 00:10:23.737 "data_offset": 0, 00:10:23.737 "data_size": 65536 00:10:23.737 }, 00:10:23.737 { 00:10:23.737 "name": "BaseBdev3", 00:10:23.737 "uuid": 
"02485068-aadd-45c0-869f-f19b79dbeab2", 00:10:23.737 "is_configured": true, 00:10:23.737 "data_offset": 0, 00:10:23.737 "data_size": 65536 00:10:23.737 }, 00:10:23.737 { 00:10:23.737 "name": "BaseBdev4", 00:10:23.737 "uuid": "0d8d6731-0f89-48ca-a1b7-9142d6361c33", 00:10:23.737 "is_configured": true, 00:10:23.737 "data_offset": 0, 00:10:23.737 "data_size": 65536 00:10:23.737 } 00:10:23.737 ] 00:10:23.737 }' 00:10:23.737 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.737 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.307 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.307 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:24.307 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.307 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.307 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.307 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:24.307 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:24.307 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.307 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.307 [2024-12-07 16:36:22.978492] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:24.307 16:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.307 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:24.307 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.307 16:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.307 "name": "Existed_Raid", 00:10:24.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.307 "strip_size_kb": 64, 00:10:24.307 "state": "configuring", 00:10:24.307 "raid_level": "concat", 00:10:24.307 "superblock": false, 00:10:24.307 "num_base_bdevs": 4, 00:10:24.307 
"num_base_bdevs_discovered": 2, 00:10:24.307 "num_base_bdevs_operational": 4, 00:10:24.307 "base_bdevs_list": [ 00:10:24.307 { 00:10:24.307 "name": null, 00:10:24.307 "uuid": "1b22cbfb-f7ed-4183-97bd-46a0ba150dbf", 00:10:24.307 "is_configured": false, 00:10:24.307 "data_offset": 0, 00:10:24.307 "data_size": 65536 00:10:24.307 }, 00:10:24.307 { 00:10:24.307 "name": null, 00:10:24.307 "uuid": "d6695576-a727-44fd-9911-18d9d98cacee", 00:10:24.307 "is_configured": false, 00:10:24.307 "data_offset": 0, 00:10:24.307 "data_size": 65536 00:10:24.307 }, 00:10:24.307 { 00:10:24.307 "name": "BaseBdev3", 00:10:24.307 "uuid": "02485068-aadd-45c0-869f-f19b79dbeab2", 00:10:24.307 "is_configured": true, 00:10:24.307 "data_offset": 0, 00:10:24.307 "data_size": 65536 00:10:24.307 }, 00:10:24.307 { 00:10:24.307 "name": "BaseBdev4", 00:10:24.307 "uuid": "0d8d6731-0f89-48ca-a1b7-9142d6361c33", 00:10:24.307 "is_configured": true, 00:10:24.307 "data_offset": 0, 00:10:24.307 "data_size": 65536 00:10:24.307 } 00:10:24.307 ] 00:10:24.307 }' 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.307 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.568 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.568 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:24.568 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.568 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.828 [2024-12-07 16:36:23.501988] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.828 "name": "Existed_Raid", 00:10:24.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.828 "strip_size_kb": 64, 00:10:24.828 "state": "configuring", 00:10:24.828 "raid_level": "concat", 00:10:24.828 "superblock": false, 00:10:24.828 "num_base_bdevs": 4, 00:10:24.828 "num_base_bdevs_discovered": 3, 00:10:24.828 "num_base_bdevs_operational": 4, 00:10:24.828 "base_bdevs_list": [ 00:10:24.828 { 00:10:24.828 "name": null, 00:10:24.828 "uuid": "1b22cbfb-f7ed-4183-97bd-46a0ba150dbf", 00:10:24.828 "is_configured": false, 00:10:24.828 "data_offset": 0, 00:10:24.828 "data_size": 65536 00:10:24.828 }, 00:10:24.828 { 00:10:24.828 "name": "BaseBdev2", 00:10:24.828 "uuid": "d6695576-a727-44fd-9911-18d9d98cacee", 00:10:24.828 "is_configured": true, 00:10:24.828 "data_offset": 0, 00:10:24.828 "data_size": 65536 00:10:24.828 }, 00:10:24.828 { 00:10:24.828 "name": "BaseBdev3", 00:10:24.828 "uuid": "02485068-aadd-45c0-869f-f19b79dbeab2", 00:10:24.828 "is_configured": true, 00:10:24.828 "data_offset": 0, 00:10:24.828 "data_size": 65536 00:10:24.828 }, 00:10:24.828 { 00:10:24.828 "name": "BaseBdev4", 00:10:24.828 "uuid": "0d8d6731-0f89-48ca-a1b7-9142d6361c33", 00:10:24.828 "is_configured": true, 00:10:24.828 "data_offset": 0, 00:10:24.828 "data_size": 65536 00:10:24.828 } 00:10:24.828 ] 00:10:24.828 }' 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.828 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.087 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:25.087 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.087 16:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:25.087 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.351 16:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1b22cbfb-f7ed-4183-97bd-46a0ba150dbf 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.351 [2024-12-07 16:36:24.093882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:25.351 [2024-12-07 16:36:24.093937] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:25.351 [2024-12-07 16:36:24.093945] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:25.351 [2024-12-07 16:36:24.094238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:25.351 [2024-12-07 16:36:24.094384] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:25.351 [2024-12-07 16:36:24.094420] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:25.351 [2024-12-07 16:36:24.094624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.351 NewBaseBdev 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:25.351 [ 00:10:25.351 { 00:10:25.351 "name": "NewBaseBdev", 00:10:25.351 "aliases": [ 00:10:25.351 "1b22cbfb-f7ed-4183-97bd-46a0ba150dbf" 00:10:25.351 ], 00:10:25.351 "product_name": "Malloc disk", 00:10:25.351 "block_size": 512, 00:10:25.351 "num_blocks": 65536, 00:10:25.351 "uuid": "1b22cbfb-f7ed-4183-97bd-46a0ba150dbf", 00:10:25.351 "assigned_rate_limits": { 00:10:25.351 "rw_ios_per_sec": 0, 00:10:25.351 "rw_mbytes_per_sec": 0, 00:10:25.351 "r_mbytes_per_sec": 0, 00:10:25.351 "w_mbytes_per_sec": 0 00:10:25.351 }, 00:10:25.351 "claimed": true, 00:10:25.351 "claim_type": "exclusive_write", 00:10:25.351 "zoned": false, 00:10:25.351 "supported_io_types": { 00:10:25.351 "read": true, 00:10:25.351 "write": true, 00:10:25.351 "unmap": true, 00:10:25.351 "flush": true, 00:10:25.351 "reset": true, 00:10:25.351 "nvme_admin": false, 00:10:25.351 "nvme_io": false, 00:10:25.351 "nvme_io_md": false, 00:10:25.351 "write_zeroes": true, 00:10:25.351 "zcopy": true, 00:10:25.351 "get_zone_info": false, 00:10:25.351 "zone_management": false, 00:10:25.351 "zone_append": false, 00:10:25.351 "compare": false, 00:10:25.351 "compare_and_write": false, 00:10:25.351 "abort": true, 00:10:25.351 "seek_hole": false, 00:10:25.351 "seek_data": false, 00:10:25.351 "copy": true, 00:10:25.351 "nvme_iov_md": false 00:10:25.351 }, 00:10:25.351 "memory_domains": [ 00:10:25.351 { 00:10:25.351 "dma_device_id": "system", 00:10:25.351 "dma_device_type": 1 00:10:25.351 }, 00:10:25.351 { 00:10:25.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.351 "dma_device_type": 2 00:10:25.351 } 00:10:25.351 ], 00:10:25.351 "driver_specific": {} 00:10:25.351 } 00:10:25.351 ] 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.351 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.351 "name": "Existed_Raid", 00:10:25.351 "uuid": "6d96703a-aa52-45ab-bc1c-921c38d1eb15", 00:10:25.351 "strip_size_kb": 64, 00:10:25.351 "state": "online", 00:10:25.351 "raid_level": "concat", 00:10:25.351 "superblock": false, 00:10:25.351 
"num_base_bdevs": 4, 00:10:25.351 "num_base_bdevs_discovered": 4, 00:10:25.351 "num_base_bdevs_operational": 4, 00:10:25.351 "base_bdevs_list": [ 00:10:25.351 { 00:10:25.351 "name": "NewBaseBdev", 00:10:25.351 "uuid": "1b22cbfb-f7ed-4183-97bd-46a0ba150dbf", 00:10:25.351 "is_configured": true, 00:10:25.351 "data_offset": 0, 00:10:25.351 "data_size": 65536 00:10:25.351 }, 00:10:25.351 { 00:10:25.351 "name": "BaseBdev2", 00:10:25.352 "uuid": "d6695576-a727-44fd-9911-18d9d98cacee", 00:10:25.352 "is_configured": true, 00:10:25.352 "data_offset": 0, 00:10:25.352 "data_size": 65536 00:10:25.352 }, 00:10:25.352 { 00:10:25.352 "name": "BaseBdev3", 00:10:25.352 "uuid": "02485068-aadd-45c0-869f-f19b79dbeab2", 00:10:25.352 "is_configured": true, 00:10:25.352 "data_offset": 0, 00:10:25.352 "data_size": 65536 00:10:25.352 }, 00:10:25.352 { 00:10:25.352 "name": "BaseBdev4", 00:10:25.352 "uuid": "0d8d6731-0f89-48ca-a1b7-9142d6361c33", 00:10:25.352 "is_configured": true, 00:10:25.352 "data_offset": 0, 00:10:25.352 "data_size": 65536 00:10:25.352 } 00:10:25.352 ] 00:10:25.352 }' 00:10:25.352 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.352 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.919 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:25.919 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:25.919 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:25.919 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.919 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.919 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.919 16:36:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:25.919 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.919 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.919 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.919 [2024-12-07 16:36:24.601453] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.919 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.919 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.919 "name": "Existed_Raid", 00:10:25.919 "aliases": [ 00:10:25.919 "6d96703a-aa52-45ab-bc1c-921c38d1eb15" 00:10:25.919 ], 00:10:25.919 "product_name": "Raid Volume", 00:10:25.919 "block_size": 512, 00:10:25.919 "num_blocks": 262144, 00:10:25.919 "uuid": "6d96703a-aa52-45ab-bc1c-921c38d1eb15", 00:10:25.919 "assigned_rate_limits": { 00:10:25.919 "rw_ios_per_sec": 0, 00:10:25.919 "rw_mbytes_per_sec": 0, 00:10:25.919 "r_mbytes_per_sec": 0, 00:10:25.919 "w_mbytes_per_sec": 0 00:10:25.919 }, 00:10:25.919 "claimed": false, 00:10:25.919 "zoned": false, 00:10:25.919 "supported_io_types": { 00:10:25.919 "read": true, 00:10:25.919 "write": true, 00:10:25.919 "unmap": true, 00:10:25.919 "flush": true, 00:10:25.919 "reset": true, 00:10:25.919 "nvme_admin": false, 00:10:25.919 "nvme_io": false, 00:10:25.919 "nvme_io_md": false, 00:10:25.919 "write_zeroes": true, 00:10:25.920 "zcopy": false, 00:10:25.920 "get_zone_info": false, 00:10:25.920 "zone_management": false, 00:10:25.920 "zone_append": false, 00:10:25.920 "compare": false, 00:10:25.920 "compare_and_write": false, 00:10:25.920 "abort": false, 00:10:25.920 "seek_hole": false, 00:10:25.920 "seek_data": false, 00:10:25.920 "copy": false, 00:10:25.920 "nvme_iov_md": false 00:10:25.920 }, 
00:10:25.920 "memory_domains": [ 00:10:25.920 { 00:10:25.920 "dma_device_id": "system", 00:10:25.920 "dma_device_type": 1 00:10:25.920 }, 00:10:25.920 { 00:10:25.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.920 "dma_device_type": 2 00:10:25.920 }, 00:10:25.920 { 00:10:25.920 "dma_device_id": "system", 00:10:25.920 "dma_device_type": 1 00:10:25.920 }, 00:10:25.920 { 00:10:25.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.920 "dma_device_type": 2 00:10:25.920 }, 00:10:25.920 { 00:10:25.920 "dma_device_id": "system", 00:10:25.920 "dma_device_type": 1 00:10:25.920 }, 00:10:25.920 { 00:10:25.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.920 "dma_device_type": 2 00:10:25.920 }, 00:10:25.920 { 00:10:25.920 "dma_device_id": "system", 00:10:25.920 "dma_device_type": 1 00:10:25.920 }, 00:10:25.920 { 00:10:25.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.920 "dma_device_type": 2 00:10:25.920 } 00:10:25.920 ], 00:10:25.920 "driver_specific": { 00:10:25.920 "raid": { 00:10:25.920 "uuid": "6d96703a-aa52-45ab-bc1c-921c38d1eb15", 00:10:25.920 "strip_size_kb": 64, 00:10:25.920 "state": "online", 00:10:25.920 "raid_level": "concat", 00:10:25.920 "superblock": false, 00:10:25.920 "num_base_bdevs": 4, 00:10:25.920 "num_base_bdevs_discovered": 4, 00:10:25.920 "num_base_bdevs_operational": 4, 00:10:25.920 "base_bdevs_list": [ 00:10:25.920 { 00:10:25.920 "name": "NewBaseBdev", 00:10:25.920 "uuid": "1b22cbfb-f7ed-4183-97bd-46a0ba150dbf", 00:10:25.920 "is_configured": true, 00:10:25.920 "data_offset": 0, 00:10:25.920 "data_size": 65536 00:10:25.920 }, 00:10:25.920 { 00:10:25.920 "name": "BaseBdev2", 00:10:25.920 "uuid": "d6695576-a727-44fd-9911-18d9d98cacee", 00:10:25.920 "is_configured": true, 00:10:25.920 "data_offset": 0, 00:10:25.920 "data_size": 65536 00:10:25.920 }, 00:10:25.920 { 00:10:25.920 "name": "BaseBdev3", 00:10:25.920 "uuid": "02485068-aadd-45c0-869f-f19b79dbeab2", 00:10:25.920 "is_configured": true, 00:10:25.920 "data_offset": 0, 
00:10:25.920 "data_size": 65536 00:10:25.920 }, 00:10:25.920 { 00:10:25.920 "name": "BaseBdev4", 00:10:25.920 "uuid": "0d8d6731-0f89-48ca-a1b7-9142d6361c33", 00:10:25.920 "is_configured": true, 00:10:25.920 "data_offset": 0, 00:10:25.920 "data_size": 65536 00:10:25.920 } 00:10:25.920 ] 00:10:25.920 } 00:10:25.920 } 00:10:25.920 }' 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:25.920 BaseBdev2 00:10:25.920 BaseBdev3 00:10:25.920 BaseBdev4' 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.920 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.180 [2024-12-07 16:36:24.920476] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.180 [2024-12-07 16:36:24.920509] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.180 [2024-12-07 16:36:24.920599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.180 [2024-12-07 16:36:24.920683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.180 [2024-12-07 16:36:24.920694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82476 00:10:26.180 16:36:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82476 ']' 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82476 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82476 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:26.180 killing process with pid 82476 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82476' 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82476 00:10:26.180 [2024-12-07 16:36:24.970608] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.180 16:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82476 00:10:26.180 [2024-12-07 16:36:25.047291] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.755 ************************************ 00:10:26.755 END TEST raid_state_function_test 00:10:26.755 ************************************ 00:10:26.755 16:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:26.755 00:10:26.755 real 0m9.988s 00:10:26.755 user 0m16.689s 00:10:26.755 sys 0m2.274s 00:10:26.755 16:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.755 16:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.755 16:36:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:26.755 16:36:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:26.755 16:36:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.755 16:36:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:26.755 ************************************ 00:10:26.755 START TEST raid_state_function_test_sb 00:10:26.755 ************************************ 00:10:26.755 16:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:10:26.755 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83125 00:10:26.756 16:36:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83125' 00:10:26.756 Process raid pid: 83125 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83125 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83125 ']' 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:26.756 16:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.756 [2024-12-07 16:36:25.597832] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:26.756 [2024-12-07 16:36:25.598056] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.015 [2024-12-07 16:36:25.763154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.015 [2024-12-07 16:36:25.839309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.274 [2024-12-07 16:36:25.917623] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.274 [2024-12-07 16:36:25.917746] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.533 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:27.533 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:27.533 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:27.533 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.533 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.792 [2024-12-07 16:36:26.432847] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.792 [2024-12-07 16:36:26.432918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.792 [2024-12-07 16:36:26.432939] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.792 [2024-12-07 16:36:26.432951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.792 [2024-12-07 16:36:26.432958] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:27.792 [2024-12-07 16:36:26.432973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.792 [2024-12-07 16:36:26.432979] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:27.792 [2024-12-07 16:36:26.432989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.792 
16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.792 "name": "Existed_Raid", 00:10:27.792 "uuid": "a27d9421-d092-48c7-9194-8bf5bce577cd", 00:10:27.792 "strip_size_kb": 64, 00:10:27.792 "state": "configuring", 00:10:27.792 "raid_level": "concat", 00:10:27.792 "superblock": true, 00:10:27.792 "num_base_bdevs": 4, 00:10:27.792 "num_base_bdevs_discovered": 0, 00:10:27.792 "num_base_bdevs_operational": 4, 00:10:27.792 "base_bdevs_list": [ 00:10:27.792 { 00:10:27.792 "name": "BaseBdev1", 00:10:27.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.792 "is_configured": false, 00:10:27.792 "data_offset": 0, 00:10:27.792 "data_size": 0 00:10:27.792 }, 00:10:27.792 { 00:10:27.792 "name": "BaseBdev2", 00:10:27.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.792 "is_configured": false, 00:10:27.792 "data_offset": 0, 00:10:27.792 "data_size": 0 00:10:27.792 }, 00:10:27.792 { 00:10:27.792 "name": "BaseBdev3", 00:10:27.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.792 "is_configured": false, 00:10:27.792 "data_offset": 0, 00:10:27.792 "data_size": 0 00:10:27.792 }, 00:10:27.792 { 00:10:27.792 "name": "BaseBdev4", 00:10:27.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.792 "is_configured": false, 00:10:27.792 "data_offset": 0, 00:10:27.792 "data_size": 0 00:10:27.792 } 00:10:27.792 ] 00:10:27.792 }' 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.792 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.052 16:36:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:28.052 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.052 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.052 [2024-12-07 16:36:26.867987] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:28.052 [2024-12-07 16:36:26.868078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:28.052 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.052 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:28.052 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.052 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.052 [2024-12-07 16:36:26.876022] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:28.052 [2024-12-07 16:36:26.876098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:28.052 [2024-12-07 16:36:26.876130] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:28.052 [2024-12-07 16:36:26.876171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:28.052 [2024-12-07 16:36:26.876198] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:28.052 [2024-12-07 16:36:26.876221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:28.052 [2024-12-07 16:36:26.876279] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:28.052 [2024-12-07 16:36:26.876301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:28.052 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.052 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:28.052 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.052 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.052 [2024-12-07 16:36:26.903084] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.052 BaseBdev1 00:10:28.052 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.052 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:28.052 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.053 [ 00:10:28.053 { 00:10:28.053 "name": "BaseBdev1", 00:10:28.053 "aliases": [ 00:10:28.053 "4570393e-5f33-42e2-a47f-09daab636cda" 00:10:28.053 ], 00:10:28.053 "product_name": "Malloc disk", 00:10:28.053 "block_size": 512, 00:10:28.053 "num_blocks": 65536, 00:10:28.053 "uuid": "4570393e-5f33-42e2-a47f-09daab636cda", 00:10:28.053 "assigned_rate_limits": { 00:10:28.053 "rw_ios_per_sec": 0, 00:10:28.053 "rw_mbytes_per_sec": 0, 00:10:28.053 "r_mbytes_per_sec": 0, 00:10:28.053 "w_mbytes_per_sec": 0 00:10:28.053 }, 00:10:28.053 "claimed": true, 00:10:28.053 "claim_type": "exclusive_write", 00:10:28.053 "zoned": false, 00:10:28.053 "supported_io_types": { 00:10:28.053 "read": true, 00:10:28.053 "write": true, 00:10:28.053 "unmap": true, 00:10:28.053 "flush": true, 00:10:28.053 "reset": true, 00:10:28.053 "nvme_admin": false, 00:10:28.053 "nvme_io": false, 00:10:28.053 "nvme_io_md": false, 00:10:28.053 "write_zeroes": true, 00:10:28.053 "zcopy": true, 00:10:28.053 "get_zone_info": false, 00:10:28.053 "zone_management": false, 00:10:28.053 "zone_append": false, 00:10:28.053 "compare": false, 00:10:28.053 "compare_and_write": false, 00:10:28.053 "abort": true, 00:10:28.053 "seek_hole": false, 00:10:28.053 "seek_data": false, 00:10:28.053 "copy": true, 00:10:28.053 "nvme_iov_md": false 00:10:28.053 }, 00:10:28.053 "memory_domains": [ 00:10:28.053 { 00:10:28.053 "dma_device_id": "system", 00:10:28.053 "dma_device_type": 1 00:10:28.053 }, 00:10:28.053 { 00:10:28.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.053 "dma_device_type": 2 00:10:28.053 } 
00:10:28.053 ], 00:10:28.053 "driver_specific": {} 00:10:28.053 } 00:10:28.053 ] 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.053 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.313 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.313 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.313 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.313 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.313 16:36:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.313 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.313 "name": "Existed_Raid", 00:10:28.313 "uuid": "97ec2873-f2ba-4011-806b-76e7cd50800e", 00:10:28.313 "strip_size_kb": 64, 00:10:28.313 "state": "configuring", 00:10:28.313 "raid_level": "concat", 00:10:28.313 "superblock": true, 00:10:28.313 "num_base_bdevs": 4, 00:10:28.313 "num_base_bdevs_discovered": 1, 00:10:28.313 "num_base_bdevs_operational": 4, 00:10:28.313 "base_bdevs_list": [ 00:10:28.313 { 00:10:28.313 "name": "BaseBdev1", 00:10:28.313 "uuid": "4570393e-5f33-42e2-a47f-09daab636cda", 00:10:28.313 "is_configured": true, 00:10:28.313 "data_offset": 2048, 00:10:28.313 "data_size": 63488 00:10:28.313 }, 00:10:28.313 { 00:10:28.313 "name": "BaseBdev2", 00:10:28.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.313 "is_configured": false, 00:10:28.313 "data_offset": 0, 00:10:28.313 "data_size": 0 00:10:28.313 }, 00:10:28.313 { 00:10:28.313 "name": "BaseBdev3", 00:10:28.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.313 "is_configured": false, 00:10:28.313 "data_offset": 0, 00:10:28.313 "data_size": 0 00:10:28.313 }, 00:10:28.313 { 00:10:28.313 "name": "BaseBdev4", 00:10:28.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.313 "is_configured": false, 00:10:28.313 "data_offset": 0, 00:10:28.313 "data_size": 0 00:10:28.313 } 00:10:28.313 ] 00:10:28.313 }' 00:10:28.313 16:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.313 16:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.573 16:36:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.573 [2024-12-07 16:36:27.378432] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:28.573 [2024-12-07 16:36:27.378564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.573 [2024-12-07 16:36:27.390440] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.573 [2024-12-07 16:36:27.392654] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:28.573 [2024-12-07 16:36:27.392692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:28.573 [2024-12-07 16:36:27.392702] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:28.573 [2024-12-07 16:36:27.392710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:28.573 [2024-12-07 16:36:27.392716] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:28.573 [2024-12-07 16:36:27.392724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.573 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.574 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.574 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.574 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.574 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.574 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:28.574 "name": "Existed_Raid", 00:10:28.574 "uuid": "3de64bcc-8b06-40d2-bca2-d48464683b46", 00:10:28.574 "strip_size_kb": 64, 00:10:28.574 "state": "configuring", 00:10:28.574 "raid_level": "concat", 00:10:28.574 "superblock": true, 00:10:28.574 "num_base_bdevs": 4, 00:10:28.574 "num_base_bdevs_discovered": 1, 00:10:28.574 "num_base_bdevs_operational": 4, 00:10:28.574 "base_bdevs_list": [ 00:10:28.574 { 00:10:28.574 "name": "BaseBdev1", 00:10:28.574 "uuid": "4570393e-5f33-42e2-a47f-09daab636cda", 00:10:28.574 "is_configured": true, 00:10:28.574 "data_offset": 2048, 00:10:28.574 "data_size": 63488 00:10:28.574 }, 00:10:28.574 { 00:10:28.574 "name": "BaseBdev2", 00:10:28.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.574 "is_configured": false, 00:10:28.574 "data_offset": 0, 00:10:28.574 "data_size": 0 00:10:28.574 }, 00:10:28.574 { 00:10:28.574 "name": "BaseBdev3", 00:10:28.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.574 "is_configured": false, 00:10:28.574 "data_offset": 0, 00:10:28.574 "data_size": 0 00:10:28.574 }, 00:10:28.574 { 00:10:28.574 "name": "BaseBdev4", 00:10:28.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.574 "is_configured": false, 00:10:28.574 "data_offset": 0, 00:10:28.574 "data_size": 0 00:10:28.574 } 00:10:28.574 ] 00:10:28.574 }' 00:10:28.574 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.574 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.143 [2024-12-07 16:36:27.887521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:29.143 BaseBdev2 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.143 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.143 [ 00:10:29.143 { 00:10:29.143 "name": "BaseBdev2", 00:10:29.143 "aliases": [ 00:10:29.143 "9484d421-c477-49a3-90f6-943d183009d2" 00:10:29.143 ], 00:10:29.143 "product_name": "Malloc disk", 00:10:29.143 "block_size": 512, 00:10:29.144 "num_blocks": 65536, 00:10:29.144 "uuid": "9484d421-c477-49a3-90f6-943d183009d2", 
00:10:29.144 "assigned_rate_limits": { 00:10:29.144 "rw_ios_per_sec": 0, 00:10:29.144 "rw_mbytes_per_sec": 0, 00:10:29.144 "r_mbytes_per_sec": 0, 00:10:29.144 "w_mbytes_per_sec": 0 00:10:29.144 }, 00:10:29.144 "claimed": true, 00:10:29.144 "claim_type": "exclusive_write", 00:10:29.144 "zoned": false, 00:10:29.144 "supported_io_types": { 00:10:29.144 "read": true, 00:10:29.144 "write": true, 00:10:29.144 "unmap": true, 00:10:29.144 "flush": true, 00:10:29.144 "reset": true, 00:10:29.144 "nvme_admin": false, 00:10:29.144 "nvme_io": false, 00:10:29.144 "nvme_io_md": false, 00:10:29.144 "write_zeroes": true, 00:10:29.144 "zcopy": true, 00:10:29.144 "get_zone_info": false, 00:10:29.144 "zone_management": false, 00:10:29.144 "zone_append": false, 00:10:29.144 "compare": false, 00:10:29.144 "compare_and_write": false, 00:10:29.144 "abort": true, 00:10:29.144 "seek_hole": false, 00:10:29.144 "seek_data": false, 00:10:29.144 "copy": true, 00:10:29.144 "nvme_iov_md": false 00:10:29.144 }, 00:10:29.144 "memory_domains": [ 00:10:29.144 { 00:10:29.144 "dma_device_id": "system", 00:10:29.144 "dma_device_type": 1 00:10:29.144 }, 00:10:29.144 { 00:10:29.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.144 "dma_device_type": 2 00:10:29.144 } 00:10:29.144 ], 00:10:29.144 "driver_specific": {} 00:10:29.144 } 00:10:29.144 ] 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.144 "name": "Existed_Raid", 00:10:29.144 "uuid": "3de64bcc-8b06-40d2-bca2-d48464683b46", 00:10:29.144 "strip_size_kb": 64, 00:10:29.144 "state": "configuring", 00:10:29.144 "raid_level": "concat", 00:10:29.144 "superblock": true, 00:10:29.144 "num_base_bdevs": 4, 00:10:29.144 "num_base_bdevs_discovered": 2, 00:10:29.144 
"num_base_bdevs_operational": 4, 00:10:29.144 "base_bdevs_list": [ 00:10:29.144 { 00:10:29.144 "name": "BaseBdev1", 00:10:29.144 "uuid": "4570393e-5f33-42e2-a47f-09daab636cda", 00:10:29.144 "is_configured": true, 00:10:29.144 "data_offset": 2048, 00:10:29.144 "data_size": 63488 00:10:29.144 }, 00:10:29.144 { 00:10:29.144 "name": "BaseBdev2", 00:10:29.144 "uuid": "9484d421-c477-49a3-90f6-943d183009d2", 00:10:29.144 "is_configured": true, 00:10:29.144 "data_offset": 2048, 00:10:29.144 "data_size": 63488 00:10:29.144 }, 00:10:29.144 { 00:10:29.144 "name": "BaseBdev3", 00:10:29.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.144 "is_configured": false, 00:10:29.144 "data_offset": 0, 00:10:29.144 "data_size": 0 00:10:29.144 }, 00:10:29.144 { 00:10:29.144 "name": "BaseBdev4", 00:10:29.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.144 "is_configured": false, 00:10:29.144 "data_offset": 0, 00:10:29.144 "data_size": 0 00:10:29.144 } 00:10:29.144 ] 00:10:29.144 }' 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.144 16:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.714 [2024-12-07 16:36:28.367875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.714 BaseBdev3 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.714 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.714 [ 00:10:29.714 { 00:10:29.714 "name": "BaseBdev3", 00:10:29.714 "aliases": [ 00:10:29.714 "b905ecd6-e424-4d05-b4ec-7aab0826f756" 00:10:29.714 ], 00:10:29.714 "product_name": "Malloc disk", 00:10:29.714 "block_size": 512, 00:10:29.714 "num_blocks": 65536, 00:10:29.714 "uuid": "b905ecd6-e424-4d05-b4ec-7aab0826f756", 00:10:29.714 "assigned_rate_limits": { 00:10:29.714 "rw_ios_per_sec": 0, 00:10:29.714 "rw_mbytes_per_sec": 0, 00:10:29.714 "r_mbytes_per_sec": 0, 00:10:29.714 "w_mbytes_per_sec": 0 00:10:29.714 }, 00:10:29.714 "claimed": true, 00:10:29.714 "claim_type": "exclusive_write", 00:10:29.714 "zoned": false, 00:10:29.714 "supported_io_types": { 
00:10:29.714 "read": true, 00:10:29.714 "write": true, 00:10:29.714 "unmap": true, 00:10:29.714 "flush": true, 00:10:29.714 "reset": true, 00:10:29.714 "nvme_admin": false, 00:10:29.714 "nvme_io": false, 00:10:29.714 "nvme_io_md": false, 00:10:29.714 "write_zeroes": true, 00:10:29.714 "zcopy": true, 00:10:29.714 "get_zone_info": false, 00:10:29.714 "zone_management": false, 00:10:29.714 "zone_append": false, 00:10:29.714 "compare": false, 00:10:29.714 "compare_and_write": false, 00:10:29.714 "abort": true, 00:10:29.714 "seek_hole": false, 00:10:29.714 "seek_data": false, 00:10:29.714 "copy": true, 00:10:29.714 "nvme_iov_md": false 00:10:29.714 }, 00:10:29.714 "memory_domains": [ 00:10:29.714 { 00:10:29.714 "dma_device_id": "system", 00:10:29.714 "dma_device_type": 1 00:10:29.714 }, 00:10:29.714 { 00:10:29.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.714 "dma_device_type": 2 00:10:29.714 } 00:10:29.714 ], 00:10:29.715 "driver_specific": {} 00:10:29.715 } 00:10:29.715 ] 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.715 "name": "Existed_Raid", 00:10:29.715 "uuid": "3de64bcc-8b06-40d2-bca2-d48464683b46", 00:10:29.715 "strip_size_kb": 64, 00:10:29.715 "state": "configuring", 00:10:29.715 "raid_level": "concat", 00:10:29.715 "superblock": true, 00:10:29.715 "num_base_bdevs": 4, 00:10:29.715 "num_base_bdevs_discovered": 3, 00:10:29.715 "num_base_bdevs_operational": 4, 00:10:29.715 "base_bdevs_list": [ 00:10:29.715 { 00:10:29.715 "name": "BaseBdev1", 00:10:29.715 "uuid": "4570393e-5f33-42e2-a47f-09daab636cda", 00:10:29.715 "is_configured": true, 00:10:29.715 "data_offset": 2048, 00:10:29.715 "data_size": 63488 00:10:29.715 }, 00:10:29.715 { 00:10:29.715 "name": "BaseBdev2", 00:10:29.715 
"uuid": "9484d421-c477-49a3-90f6-943d183009d2", 00:10:29.715 "is_configured": true, 00:10:29.715 "data_offset": 2048, 00:10:29.715 "data_size": 63488 00:10:29.715 }, 00:10:29.715 { 00:10:29.715 "name": "BaseBdev3", 00:10:29.715 "uuid": "b905ecd6-e424-4d05-b4ec-7aab0826f756", 00:10:29.715 "is_configured": true, 00:10:29.715 "data_offset": 2048, 00:10:29.715 "data_size": 63488 00:10:29.715 }, 00:10:29.715 { 00:10:29.715 "name": "BaseBdev4", 00:10:29.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.715 "is_configured": false, 00:10:29.715 "data_offset": 0, 00:10:29.715 "data_size": 0 00:10:29.715 } 00:10:29.715 ] 00:10:29.715 }' 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.715 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.288 [2024-12-07 16:36:28.900491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:30.288 [2024-12-07 16:36:28.900738] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:30.288 [2024-12-07 16:36:28.900763] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:30.288 [2024-12-07 16:36:28.901089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:30.288 [2024-12-07 16:36:28.901260] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:30.288 [2024-12-07 16:36:28.901274] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000006980 00:10:30.288 [2024-12-07 16:36:28.901419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.288 BaseBdev4 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.288 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.288 [ 00:10:30.288 { 00:10:30.288 "name": "BaseBdev4", 00:10:30.288 "aliases": [ 00:10:30.288 "e74dfae9-11f6-41e0-a84f-a3f55556d67f" 00:10:30.288 ], 00:10:30.288 "product_name": "Malloc disk", 00:10:30.288 "block_size": 512, 
00:10:30.288 "num_blocks": 65536, 00:10:30.288 "uuid": "e74dfae9-11f6-41e0-a84f-a3f55556d67f", 00:10:30.288 "assigned_rate_limits": { 00:10:30.288 "rw_ios_per_sec": 0, 00:10:30.288 "rw_mbytes_per_sec": 0, 00:10:30.288 "r_mbytes_per_sec": 0, 00:10:30.288 "w_mbytes_per_sec": 0 00:10:30.288 }, 00:10:30.288 "claimed": true, 00:10:30.288 "claim_type": "exclusive_write", 00:10:30.288 "zoned": false, 00:10:30.288 "supported_io_types": { 00:10:30.288 "read": true, 00:10:30.288 "write": true, 00:10:30.288 "unmap": true, 00:10:30.289 "flush": true, 00:10:30.289 "reset": true, 00:10:30.289 "nvme_admin": false, 00:10:30.289 "nvme_io": false, 00:10:30.289 "nvme_io_md": false, 00:10:30.289 "write_zeroes": true, 00:10:30.289 "zcopy": true, 00:10:30.289 "get_zone_info": false, 00:10:30.289 "zone_management": false, 00:10:30.289 "zone_append": false, 00:10:30.289 "compare": false, 00:10:30.289 "compare_and_write": false, 00:10:30.289 "abort": true, 00:10:30.289 "seek_hole": false, 00:10:30.289 "seek_data": false, 00:10:30.289 "copy": true, 00:10:30.289 "nvme_iov_md": false 00:10:30.289 }, 00:10:30.289 "memory_domains": [ 00:10:30.289 { 00:10:30.289 "dma_device_id": "system", 00:10:30.289 "dma_device_type": 1 00:10:30.289 }, 00:10:30.289 { 00:10:30.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.289 "dma_device_type": 2 00:10:30.289 } 00:10:30.289 ], 00:10:30.289 "driver_specific": {} 00:10:30.289 } 00:10:30.289 ] 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.289 "name": "Existed_Raid", 00:10:30.289 "uuid": "3de64bcc-8b06-40d2-bca2-d48464683b46", 00:10:30.289 "strip_size_kb": 64, 00:10:30.289 "state": "online", 00:10:30.289 "raid_level": "concat", 00:10:30.289 "superblock": true, 00:10:30.289 "num_base_bdevs": 
4, 00:10:30.289 "num_base_bdevs_discovered": 4, 00:10:30.289 "num_base_bdevs_operational": 4, 00:10:30.289 "base_bdevs_list": [ 00:10:30.289 { 00:10:30.289 "name": "BaseBdev1", 00:10:30.289 "uuid": "4570393e-5f33-42e2-a47f-09daab636cda", 00:10:30.289 "is_configured": true, 00:10:30.289 "data_offset": 2048, 00:10:30.289 "data_size": 63488 00:10:30.289 }, 00:10:30.289 { 00:10:30.289 "name": "BaseBdev2", 00:10:30.289 "uuid": "9484d421-c477-49a3-90f6-943d183009d2", 00:10:30.289 "is_configured": true, 00:10:30.289 "data_offset": 2048, 00:10:30.289 "data_size": 63488 00:10:30.289 }, 00:10:30.289 { 00:10:30.289 "name": "BaseBdev3", 00:10:30.289 "uuid": "b905ecd6-e424-4d05-b4ec-7aab0826f756", 00:10:30.289 "is_configured": true, 00:10:30.289 "data_offset": 2048, 00:10:30.289 "data_size": 63488 00:10:30.289 }, 00:10:30.289 { 00:10:30.289 "name": "BaseBdev4", 00:10:30.289 "uuid": "e74dfae9-11f6-41e0-a84f-a3f55556d67f", 00:10:30.289 "is_configured": true, 00:10:30.289 "data_offset": 2048, 00:10:30.289 "data_size": 63488 00:10:30.289 } 00:10:30.289 ] 00:10:30.289 }' 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.289 16:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.559 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:30.559 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:30.559 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:30.559 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:30.559 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:30.559 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:30.560 
16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:30.560 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.560 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.560 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:30.560 [2024-12-07 16:36:29.368137] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.560 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.560 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:30.560 "name": "Existed_Raid", 00:10:30.560 "aliases": [ 00:10:30.560 "3de64bcc-8b06-40d2-bca2-d48464683b46" 00:10:30.560 ], 00:10:30.560 "product_name": "Raid Volume", 00:10:30.560 "block_size": 512, 00:10:30.560 "num_blocks": 253952, 00:10:30.560 "uuid": "3de64bcc-8b06-40d2-bca2-d48464683b46", 00:10:30.560 "assigned_rate_limits": { 00:10:30.560 "rw_ios_per_sec": 0, 00:10:30.560 "rw_mbytes_per_sec": 0, 00:10:30.560 "r_mbytes_per_sec": 0, 00:10:30.560 "w_mbytes_per_sec": 0 00:10:30.560 }, 00:10:30.560 "claimed": false, 00:10:30.560 "zoned": false, 00:10:30.560 "supported_io_types": { 00:10:30.560 "read": true, 00:10:30.560 "write": true, 00:10:30.560 "unmap": true, 00:10:30.560 "flush": true, 00:10:30.560 "reset": true, 00:10:30.560 "nvme_admin": false, 00:10:30.560 "nvme_io": false, 00:10:30.560 "nvme_io_md": false, 00:10:30.560 "write_zeroes": true, 00:10:30.560 "zcopy": false, 00:10:30.560 "get_zone_info": false, 00:10:30.560 "zone_management": false, 00:10:30.560 "zone_append": false, 00:10:30.560 "compare": false, 00:10:30.560 "compare_and_write": false, 00:10:30.560 "abort": false, 00:10:30.560 "seek_hole": false, 00:10:30.560 "seek_data": false, 00:10:30.560 "copy": false, 00:10:30.560 
"nvme_iov_md": false 00:10:30.560 }, 00:10:30.560 "memory_domains": [ 00:10:30.560 { 00:10:30.560 "dma_device_id": "system", 00:10:30.560 "dma_device_type": 1 00:10:30.560 }, 00:10:30.560 { 00:10:30.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.560 "dma_device_type": 2 00:10:30.560 }, 00:10:30.560 { 00:10:30.560 "dma_device_id": "system", 00:10:30.560 "dma_device_type": 1 00:10:30.560 }, 00:10:30.560 { 00:10:30.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.560 "dma_device_type": 2 00:10:30.560 }, 00:10:30.560 { 00:10:30.560 "dma_device_id": "system", 00:10:30.560 "dma_device_type": 1 00:10:30.560 }, 00:10:30.560 { 00:10:30.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.560 "dma_device_type": 2 00:10:30.560 }, 00:10:30.560 { 00:10:30.560 "dma_device_id": "system", 00:10:30.560 "dma_device_type": 1 00:10:30.560 }, 00:10:30.560 { 00:10:30.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.560 "dma_device_type": 2 00:10:30.560 } 00:10:30.560 ], 00:10:30.560 "driver_specific": { 00:10:30.560 "raid": { 00:10:30.560 "uuid": "3de64bcc-8b06-40d2-bca2-d48464683b46", 00:10:30.560 "strip_size_kb": 64, 00:10:30.560 "state": "online", 00:10:30.560 "raid_level": "concat", 00:10:30.560 "superblock": true, 00:10:30.560 "num_base_bdevs": 4, 00:10:30.560 "num_base_bdevs_discovered": 4, 00:10:30.560 "num_base_bdevs_operational": 4, 00:10:30.560 "base_bdevs_list": [ 00:10:30.560 { 00:10:30.560 "name": "BaseBdev1", 00:10:30.560 "uuid": "4570393e-5f33-42e2-a47f-09daab636cda", 00:10:30.560 "is_configured": true, 00:10:30.560 "data_offset": 2048, 00:10:30.560 "data_size": 63488 00:10:30.560 }, 00:10:30.560 { 00:10:30.560 "name": "BaseBdev2", 00:10:30.560 "uuid": "9484d421-c477-49a3-90f6-943d183009d2", 00:10:30.560 "is_configured": true, 00:10:30.560 "data_offset": 2048, 00:10:30.560 "data_size": 63488 00:10:30.560 }, 00:10:30.560 { 00:10:30.560 "name": "BaseBdev3", 00:10:30.560 "uuid": "b905ecd6-e424-4d05-b4ec-7aab0826f756", 00:10:30.560 "is_configured": true, 
00:10:30.560 "data_offset": 2048, 00:10:30.560 "data_size": 63488 00:10:30.560 }, 00:10:30.560 { 00:10:30.560 "name": "BaseBdev4", 00:10:30.560 "uuid": "e74dfae9-11f6-41e0-a84f-a3f55556d67f", 00:10:30.560 "is_configured": true, 00:10:30.560 "data_offset": 2048, 00:10:30.560 "data_size": 63488 00:10:30.560 } 00:10:30.560 ] 00:10:30.560 } 00:10:30.560 } 00:10:30.560 }' 00:10:30.560 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:30.560 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:30.560 BaseBdev2 00:10:30.560 BaseBdev3 00:10:30.560 BaseBdev4' 00:10:30.560 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.820 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:30.820 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.820 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:30.820 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.820 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.820 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.820 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.820 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.820 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.820 16:36:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.820 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:30.820 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.821 [2024-12-07 16:36:29.659288] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:30.821 [2024-12-07 16:36:29.659378] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.821 [2024-12-07 16:36:29.659468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.821 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:31.081 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.081 "name": "Existed_Raid", 00:10:31.081 "uuid": "3de64bcc-8b06-40d2-bca2-d48464683b46", 00:10:31.081 "strip_size_kb": 64, 00:10:31.081 "state": "offline", 00:10:31.081 "raid_level": "concat", 00:10:31.081 "superblock": true, 00:10:31.081 "num_base_bdevs": 4, 00:10:31.081 "num_base_bdevs_discovered": 3, 00:10:31.081 "num_base_bdevs_operational": 3, 00:10:31.081 "base_bdevs_list": [ 00:10:31.081 { 00:10:31.081 "name": null, 00:10:31.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.081 "is_configured": false, 00:10:31.081 "data_offset": 0, 00:10:31.081 "data_size": 63488 00:10:31.081 }, 00:10:31.081 { 00:10:31.081 "name": "BaseBdev2", 00:10:31.081 "uuid": "9484d421-c477-49a3-90f6-943d183009d2", 00:10:31.081 "is_configured": true, 00:10:31.081 "data_offset": 2048, 00:10:31.081 "data_size": 63488 00:10:31.081 }, 00:10:31.081 { 00:10:31.081 "name": "BaseBdev3", 00:10:31.081 "uuid": "b905ecd6-e424-4d05-b4ec-7aab0826f756", 00:10:31.081 "is_configured": true, 00:10:31.081 "data_offset": 2048, 00:10:31.081 "data_size": 63488 00:10:31.081 }, 00:10:31.081 { 00:10:31.081 "name": "BaseBdev4", 00:10:31.081 "uuid": "e74dfae9-11f6-41e0-a84f-a3f55556d67f", 00:10:31.081 "is_configured": true, 00:10:31.081 "data_offset": 2048, 00:10:31.081 "data_size": 63488 00:10:31.081 } 00:10:31.081 ] 00:10:31.081 }' 00:10:31.081 16:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.081 16:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.340 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:31.340 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:31.340 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.340 
16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:31.340 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.340 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.340 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.340 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:31.340 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:31.340 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:31.340 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.340 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.340 [2024-12-07 16:36:30.175615] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:31.340 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.341 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:31.341 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:31.341 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.341 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:31.341 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.341 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.341 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:31.601 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.602 [2024-12-07 16:36:30.252746] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:31.602 16:36:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.602 [2024-12-07 16:36:30.333852] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:31.602 [2024-12-07 16:36:30.333980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.602 BaseBdev2 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.602 [ 00:10:31.602 { 00:10:31.602 "name": "BaseBdev2", 00:10:31.602 "aliases": [ 00:10:31.602 
"6aa35656-5ecc-4090-9398-7da8193e44e0" 00:10:31.602 ], 00:10:31.602 "product_name": "Malloc disk", 00:10:31.602 "block_size": 512, 00:10:31.602 "num_blocks": 65536, 00:10:31.602 "uuid": "6aa35656-5ecc-4090-9398-7da8193e44e0", 00:10:31.602 "assigned_rate_limits": { 00:10:31.602 "rw_ios_per_sec": 0, 00:10:31.602 "rw_mbytes_per_sec": 0, 00:10:31.602 "r_mbytes_per_sec": 0, 00:10:31.602 "w_mbytes_per_sec": 0 00:10:31.602 }, 00:10:31.602 "claimed": false, 00:10:31.602 "zoned": false, 00:10:31.602 "supported_io_types": { 00:10:31.602 "read": true, 00:10:31.602 "write": true, 00:10:31.602 "unmap": true, 00:10:31.602 "flush": true, 00:10:31.602 "reset": true, 00:10:31.602 "nvme_admin": false, 00:10:31.602 "nvme_io": false, 00:10:31.602 "nvme_io_md": false, 00:10:31.602 "write_zeroes": true, 00:10:31.602 "zcopy": true, 00:10:31.602 "get_zone_info": false, 00:10:31.602 "zone_management": false, 00:10:31.602 "zone_append": false, 00:10:31.602 "compare": false, 00:10:31.602 "compare_and_write": false, 00:10:31.602 "abort": true, 00:10:31.602 "seek_hole": false, 00:10:31.602 "seek_data": false, 00:10:31.602 "copy": true, 00:10:31.602 "nvme_iov_md": false 00:10:31.602 }, 00:10:31.602 "memory_domains": [ 00:10:31.602 { 00:10:31.602 "dma_device_id": "system", 00:10:31.602 "dma_device_type": 1 00:10:31.602 }, 00:10:31.602 { 00:10:31.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.602 "dma_device_type": 2 00:10:31.602 } 00:10:31.602 ], 00:10:31.602 "driver_specific": {} 00:10:31.602 } 00:10:31.602 ] 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:31.602 16:36:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.602 BaseBdev3 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.602 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.864 [ 00:10:31.864 { 
00:10:31.864 "name": "BaseBdev3", 00:10:31.864 "aliases": [ 00:10:31.864 "f4652297-b6ef-4d22-88dd-4fa07eab5d6a" 00:10:31.864 ], 00:10:31.864 "product_name": "Malloc disk", 00:10:31.864 "block_size": 512, 00:10:31.864 "num_blocks": 65536, 00:10:31.864 "uuid": "f4652297-b6ef-4d22-88dd-4fa07eab5d6a", 00:10:31.864 "assigned_rate_limits": { 00:10:31.864 "rw_ios_per_sec": 0, 00:10:31.864 "rw_mbytes_per_sec": 0, 00:10:31.864 "r_mbytes_per_sec": 0, 00:10:31.864 "w_mbytes_per_sec": 0 00:10:31.864 }, 00:10:31.864 "claimed": false, 00:10:31.864 "zoned": false, 00:10:31.864 "supported_io_types": { 00:10:31.864 "read": true, 00:10:31.864 "write": true, 00:10:31.864 "unmap": true, 00:10:31.864 "flush": true, 00:10:31.864 "reset": true, 00:10:31.864 "nvme_admin": false, 00:10:31.864 "nvme_io": false, 00:10:31.864 "nvme_io_md": false, 00:10:31.864 "write_zeroes": true, 00:10:31.864 "zcopy": true, 00:10:31.864 "get_zone_info": false, 00:10:31.864 "zone_management": false, 00:10:31.864 "zone_append": false, 00:10:31.864 "compare": false, 00:10:31.864 "compare_and_write": false, 00:10:31.864 "abort": true, 00:10:31.864 "seek_hole": false, 00:10:31.864 "seek_data": false, 00:10:31.864 "copy": true, 00:10:31.864 "nvme_iov_md": false 00:10:31.864 }, 00:10:31.864 "memory_domains": [ 00:10:31.864 { 00:10:31.864 "dma_device_id": "system", 00:10:31.864 "dma_device_type": 1 00:10:31.864 }, 00:10:31.864 { 00:10:31.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.864 "dma_device_type": 2 00:10:31.864 } 00:10:31.864 ], 00:10:31.864 "driver_specific": {} 00:10:31.864 } 00:10:31.864 ] 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.864 BaseBdev4 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.864 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:31.864 [ 00:10:31.864 { 00:10:31.864 "name": "BaseBdev4", 00:10:31.864 "aliases": [ 00:10:31.864 "6ff2bdf6-cef2-43bd-bd4f-b079f759c107" 00:10:31.864 ], 00:10:31.864 "product_name": "Malloc disk", 00:10:31.864 "block_size": 512, 00:10:31.864 "num_blocks": 65536, 00:10:31.864 "uuid": "6ff2bdf6-cef2-43bd-bd4f-b079f759c107", 00:10:31.864 "assigned_rate_limits": { 00:10:31.864 "rw_ios_per_sec": 0, 00:10:31.864 "rw_mbytes_per_sec": 0, 00:10:31.865 "r_mbytes_per_sec": 0, 00:10:31.865 "w_mbytes_per_sec": 0 00:10:31.865 }, 00:10:31.865 "claimed": false, 00:10:31.865 "zoned": false, 00:10:31.865 "supported_io_types": { 00:10:31.865 "read": true, 00:10:31.865 "write": true, 00:10:31.865 "unmap": true, 00:10:31.865 "flush": true, 00:10:31.865 "reset": true, 00:10:31.865 "nvme_admin": false, 00:10:31.865 "nvme_io": false, 00:10:31.865 "nvme_io_md": false, 00:10:31.865 "write_zeroes": true, 00:10:31.865 "zcopy": true, 00:10:31.865 "get_zone_info": false, 00:10:31.865 "zone_management": false, 00:10:31.865 "zone_append": false, 00:10:31.865 "compare": false, 00:10:31.865 "compare_and_write": false, 00:10:31.865 "abort": true, 00:10:31.865 "seek_hole": false, 00:10:31.865 "seek_data": false, 00:10:31.865 "copy": true, 00:10:31.865 "nvme_iov_md": false 00:10:31.865 }, 00:10:31.865 "memory_domains": [ 00:10:31.865 { 00:10:31.865 "dma_device_id": "system", 00:10:31.865 "dma_device_type": 1 00:10:31.865 }, 00:10:31.865 { 00:10:31.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.865 "dma_device_type": 2 00:10:31.865 } 00:10:31.865 ], 00:10:31.865 "driver_specific": {} 00:10:31.865 } 00:10:31.865 ] 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:31.865 16:36:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.865 [2024-12-07 16:36:30.608137] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.865 [2024-12-07 16:36:30.608222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.865 [2024-12-07 16:36:30.608266] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.865 [2024-12-07 16:36:30.610419] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.865 [2024-12-07 16:36:30.610506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.865 "name": "Existed_Raid", 00:10:31.865 "uuid": "4988c4aa-864c-4f51-9578-d7cc2fed6557", 00:10:31.865 "strip_size_kb": 64, 00:10:31.865 "state": "configuring", 00:10:31.865 "raid_level": "concat", 00:10:31.865 "superblock": true, 00:10:31.865 "num_base_bdevs": 4, 00:10:31.865 "num_base_bdevs_discovered": 3, 00:10:31.865 "num_base_bdevs_operational": 4, 00:10:31.865 "base_bdevs_list": [ 00:10:31.865 { 00:10:31.865 "name": "BaseBdev1", 00:10:31.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.865 "is_configured": false, 00:10:31.865 "data_offset": 0, 00:10:31.865 "data_size": 0 00:10:31.865 }, 00:10:31.865 { 00:10:31.865 "name": "BaseBdev2", 00:10:31.865 "uuid": "6aa35656-5ecc-4090-9398-7da8193e44e0", 00:10:31.865 "is_configured": true, 00:10:31.865 "data_offset": 2048, 00:10:31.865 "data_size": 63488 
00:10:31.865 }, 00:10:31.865 { 00:10:31.865 "name": "BaseBdev3", 00:10:31.865 "uuid": "f4652297-b6ef-4d22-88dd-4fa07eab5d6a", 00:10:31.865 "is_configured": true, 00:10:31.865 "data_offset": 2048, 00:10:31.865 "data_size": 63488 00:10:31.865 }, 00:10:31.865 { 00:10:31.865 "name": "BaseBdev4", 00:10:31.865 "uuid": "6ff2bdf6-cef2-43bd-bd4f-b079f759c107", 00:10:31.865 "is_configured": true, 00:10:31.865 "data_offset": 2048, 00:10:31.865 "data_size": 63488 00:10:31.865 } 00:10:31.865 ] 00:10:31.865 }' 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.865 16:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.446 [2024-12-07 16:36:31.083345] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.446 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.446 "name": "Existed_Raid", 00:10:32.446 "uuid": "4988c4aa-864c-4f51-9578-d7cc2fed6557", 00:10:32.446 "strip_size_kb": 64, 00:10:32.446 "state": "configuring", 00:10:32.446 "raid_level": "concat", 00:10:32.446 "superblock": true, 00:10:32.446 "num_base_bdevs": 4, 00:10:32.446 "num_base_bdevs_discovered": 2, 00:10:32.446 "num_base_bdevs_operational": 4, 00:10:32.446 "base_bdevs_list": [ 00:10:32.446 { 00:10:32.446 "name": "BaseBdev1", 00:10:32.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.446 "is_configured": false, 00:10:32.446 "data_offset": 0, 00:10:32.446 "data_size": 0 00:10:32.446 }, 00:10:32.446 { 00:10:32.446 "name": null, 00:10:32.446 "uuid": "6aa35656-5ecc-4090-9398-7da8193e44e0", 00:10:32.446 "is_configured": false, 00:10:32.446 "data_offset": 0, 00:10:32.446 "data_size": 63488 
00:10:32.446 }, 00:10:32.446 { 00:10:32.446 "name": "BaseBdev3", 00:10:32.446 "uuid": "f4652297-b6ef-4d22-88dd-4fa07eab5d6a", 00:10:32.446 "is_configured": true, 00:10:32.446 "data_offset": 2048, 00:10:32.447 "data_size": 63488 00:10:32.447 }, 00:10:32.447 { 00:10:32.447 "name": "BaseBdev4", 00:10:32.447 "uuid": "6ff2bdf6-cef2-43bd-bd4f-b079f759c107", 00:10:32.447 "is_configured": true, 00:10:32.447 "data_offset": 2048, 00:10:32.447 "data_size": 63488 00:10:32.447 } 00:10:32.447 ] 00:10:32.447 }' 00:10:32.447 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.447 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.707 [2024-12-07 16:36:31.523487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.707 BaseBdev1 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.707 [ 00:10:32.707 { 00:10:32.707 "name": "BaseBdev1", 00:10:32.707 "aliases": [ 00:10:32.707 "bf2c41ae-ef06-473f-a7be-83776fc49417" 00:10:32.707 ], 00:10:32.707 "product_name": "Malloc disk", 00:10:32.707 "block_size": 512, 00:10:32.707 "num_blocks": 65536, 00:10:32.707 "uuid": "bf2c41ae-ef06-473f-a7be-83776fc49417", 00:10:32.707 "assigned_rate_limits": { 00:10:32.707 "rw_ios_per_sec": 0, 00:10:32.707 "rw_mbytes_per_sec": 0, 
00:10:32.707 "r_mbytes_per_sec": 0, 00:10:32.707 "w_mbytes_per_sec": 0 00:10:32.707 }, 00:10:32.707 "claimed": true, 00:10:32.707 "claim_type": "exclusive_write", 00:10:32.707 "zoned": false, 00:10:32.707 "supported_io_types": { 00:10:32.707 "read": true, 00:10:32.707 "write": true, 00:10:32.707 "unmap": true, 00:10:32.707 "flush": true, 00:10:32.707 "reset": true, 00:10:32.707 "nvme_admin": false, 00:10:32.707 "nvme_io": false, 00:10:32.707 "nvme_io_md": false, 00:10:32.707 "write_zeroes": true, 00:10:32.707 "zcopy": true, 00:10:32.707 "get_zone_info": false, 00:10:32.707 "zone_management": false, 00:10:32.707 "zone_append": false, 00:10:32.707 "compare": false, 00:10:32.707 "compare_and_write": false, 00:10:32.707 "abort": true, 00:10:32.707 "seek_hole": false, 00:10:32.707 "seek_data": false, 00:10:32.707 "copy": true, 00:10:32.707 "nvme_iov_md": false 00:10:32.707 }, 00:10:32.707 "memory_domains": [ 00:10:32.707 { 00:10:32.707 "dma_device_id": "system", 00:10:32.707 "dma_device_type": 1 00:10:32.707 }, 00:10:32.707 { 00:10:32.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.707 "dma_device_type": 2 00:10:32.707 } 00:10:32.707 ], 00:10:32.707 "driver_specific": {} 00:10:32.707 } 00:10:32.707 ] 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.707 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.708 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.708 16:36:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.708 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.708 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.708 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.708 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.708 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.708 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.708 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.708 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.708 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.708 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.968 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.968 "name": "Existed_Raid", 00:10:32.968 "uuid": "4988c4aa-864c-4f51-9578-d7cc2fed6557", 00:10:32.968 "strip_size_kb": 64, 00:10:32.968 "state": "configuring", 00:10:32.968 "raid_level": "concat", 00:10:32.968 "superblock": true, 00:10:32.968 "num_base_bdevs": 4, 00:10:32.968 "num_base_bdevs_discovered": 3, 00:10:32.968 "num_base_bdevs_operational": 4, 00:10:32.968 "base_bdevs_list": [ 00:10:32.968 { 00:10:32.968 "name": "BaseBdev1", 00:10:32.968 "uuid": "bf2c41ae-ef06-473f-a7be-83776fc49417", 00:10:32.968 "is_configured": true, 00:10:32.968 "data_offset": 2048, 00:10:32.968 "data_size": 63488 00:10:32.968 }, 00:10:32.968 { 
00:10:32.968 "name": null, 00:10:32.968 "uuid": "6aa35656-5ecc-4090-9398-7da8193e44e0", 00:10:32.968 "is_configured": false, 00:10:32.968 "data_offset": 0, 00:10:32.968 "data_size": 63488 00:10:32.968 }, 00:10:32.968 { 00:10:32.968 "name": "BaseBdev3", 00:10:32.968 "uuid": "f4652297-b6ef-4d22-88dd-4fa07eab5d6a", 00:10:32.968 "is_configured": true, 00:10:32.968 "data_offset": 2048, 00:10:32.968 "data_size": 63488 00:10:32.968 }, 00:10:32.968 { 00:10:32.968 "name": "BaseBdev4", 00:10:32.968 "uuid": "6ff2bdf6-cef2-43bd-bd4f-b079f759c107", 00:10:32.968 "is_configured": true, 00:10:32.968 "data_offset": 2048, 00:10:32.968 "data_size": 63488 00:10:32.968 } 00:10:32.968 ] 00:10:32.968 }' 00:10:32.968 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.968 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.227 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:33.227 16:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.228 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.228 16:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.228 [2024-12-07 16:36:32.018750] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.228 16:36:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.228 "name": "Existed_Raid", 00:10:33.228 "uuid": "4988c4aa-864c-4f51-9578-d7cc2fed6557", 00:10:33.228 "strip_size_kb": 64, 00:10:33.228 "state": "configuring", 00:10:33.228 "raid_level": "concat", 00:10:33.228 "superblock": true, 00:10:33.228 "num_base_bdevs": 4, 00:10:33.228 "num_base_bdevs_discovered": 2, 00:10:33.228 "num_base_bdevs_operational": 4, 00:10:33.228 "base_bdevs_list": [ 00:10:33.228 { 00:10:33.228 "name": "BaseBdev1", 00:10:33.228 "uuid": "bf2c41ae-ef06-473f-a7be-83776fc49417", 00:10:33.228 "is_configured": true, 00:10:33.228 "data_offset": 2048, 00:10:33.228 "data_size": 63488 00:10:33.228 }, 00:10:33.228 { 00:10:33.228 "name": null, 00:10:33.228 "uuid": "6aa35656-5ecc-4090-9398-7da8193e44e0", 00:10:33.228 "is_configured": false, 00:10:33.228 "data_offset": 0, 00:10:33.228 "data_size": 63488 00:10:33.228 }, 00:10:33.228 { 00:10:33.228 "name": null, 00:10:33.228 "uuid": "f4652297-b6ef-4d22-88dd-4fa07eab5d6a", 00:10:33.228 "is_configured": false, 00:10:33.228 "data_offset": 0, 00:10:33.228 "data_size": 63488 00:10:33.228 }, 00:10:33.228 { 00:10:33.228 "name": "BaseBdev4", 00:10:33.228 "uuid": "6ff2bdf6-cef2-43bd-bd4f-b079f759c107", 00:10:33.228 "is_configured": true, 00:10:33.228 "data_offset": 2048, 00:10:33.228 "data_size": 63488 00:10:33.228 } 00:10:33.228 ] 00:10:33.228 }' 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.228 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.796 
16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.796 [2024-12-07 16:36:32.486056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.796 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.796 "name": "Existed_Raid", 00:10:33.796 "uuid": "4988c4aa-864c-4f51-9578-d7cc2fed6557", 00:10:33.796 "strip_size_kb": 64, 00:10:33.796 "state": "configuring", 00:10:33.796 "raid_level": "concat", 00:10:33.796 "superblock": true, 00:10:33.796 "num_base_bdevs": 4, 00:10:33.796 "num_base_bdevs_discovered": 3, 00:10:33.796 "num_base_bdevs_operational": 4, 00:10:33.796 "base_bdevs_list": [ 00:10:33.796 { 00:10:33.796 "name": "BaseBdev1", 00:10:33.796 "uuid": "bf2c41ae-ef06-473f-a7be-83776fc49417", 00:10:33.796 "is_configured": true, 00:10:33.796 "data_offset": 2048, 00:10:33.796 "data_size": 63488 00:10:33.796 }, 00:10:33.796 { 00:10:33.796 "name": null, 00:10:33.796 "uuid": "6aa35656-5ecc-4090-9398-7da8193e44e0", 00:10:33.796 "is_configured": false, 00:10:33.796 "data_offset": 0, 00:10:33.797 "data_size": 63488 00:10:33.797 }, 00:10:33.797 { 00:10:33.797 "name": "BaseBdev3", 00:10:33.797 "uuid": "f4652297-b6ef-4d22-88dd-4fa07eab5d6a", 00:10:33.797 "is_configured": true, 00:10:33.797 "data_offset": 2048, 00:10:33.797 "data_size": 63488 00:10:33.797 }, 00:10:33.797 { 00:10:33.797 "name": "BaseBdev4", 00:10:33.797 "uuid": 
"6ff2bdf6-cef2-43bd-bd4f-b079f759c107", 00:10:33.797 "is_configured": true, 00:10:33.797 "data_offset": 2048, 00:10:33.797 "data_size": 63488 00:10:33.797 } 00:10:33.797 ] 00:10:33.797 }' 00:10:33.797 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.797 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.056 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:34.056 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.056 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.056 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.056 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.315 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:34.315 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:34.315 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.315 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.315 [2024-12-07 16:36:32.965204] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:34.315 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.315 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:34.315 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.315 16:36:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.315 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.315 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.315 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.315 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.315 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.316 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.316 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.316 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.316 16:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.316 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.316 16:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.316 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.316 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.316 "name": "Existed_Raid", 00:10:34.316 "uuid": "4988c4aa-864c-4f51-9578-d7cc2fed6557", 00:10:34.316 "strip_size_kb": 64, 00:10:34.316 "state": "configuring", 00:10:34.316 "raid_level": "concat", 00:10:34.316 "superblock": true, 00:10:34.316 "num_base_bdevs": 4, 00:10:34.316 "num_base_bdevs_discovered": 2, 00:10:34.316 "num_base_bdevs_operational": 4, 00:10:34.316 "base_bdevs_list": [ 00:10:34.316 { 00:10:34.316 "name": null, 00:10:34.316 
"uuid": "bf2c41ae-ef06-473f-a7be-83776fc49417", 00:10:34.316 "is_configured": false, 00:10:34.316 "data_offset": 0, 00:10:34.316 "data_size": 63488 00:10:34.316 }, 00:10:34.316 { 00:10:34.316 "name": null, 00:10:34.316 "uuid": "6aa35656-5ecc-4090-9398-7da8193e44e0", 00:10:34.316 "is_configured": false, 00:10:34.316 "data_offset": 0, 00:10:34.316 "data_size": 63488 00:10:34.316 }, 00:10:34.316 { 00:10:34.316 "name": "BaseBdev3", 00:10:34.316 "uuid": "f4652297-b6ef-4d22-88dd-4fa07eab5d6a", 00:10:34.316 "is_configured": true, 00:10:34.316 "data_offset": 2048, 00:10:34.316 "data_size": 63488 00:10:34.316 }, 00:10:34.316 { 00:10:34.316 "name": "BaseBdev4", 00:10:34.316 "uuid": "6ff2bdf6-cef2-43bd-bd4f-b079f759c107", 00:10:34.316 "is_configured": true, 00:10:34.316 "data_offset": 2048, 00:10:34.316 "data_size": 63488 00:10:34.316 } 00:10:34.316 ] 00:10:34.316 }' 00:10:34.316 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.316 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.575 [2024-12-07 16:36:33.432399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.575 16:36:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.575 "name": "Existed_Raid", 00:10:34.575 "uuid": "4988c4aa-864c-4f51-9578-d7cc2fed6557", 00:10:34.575 "strip_size_kb": 64, 00:10:34.575 "state": "configuring", 00:10:34.575 "raid_level": "concat", 00:10:34.575 "superblock": true, 00:10:34.575 "num_base_bdevs": 4, 00:10:34.575 "num_base_bdevs_discovered": 3, 00:10:34.575 "num_base_bdevs_operational": 4, 00:10:34.575 "base_bdevs_list": [ 00:10:34.575 { 00:10:34.575 "name": null, 00:10:34.575 "uuid": "bf2c41ae-ef06-473f-a7be-83776fc49417", 00:10:34.575 "is_configured": false, 00:10:34.575 "data_offset": 0, 00:10:34.575 "data_size": 63488 00:10:34.575 }, 00:10:34.575 { 00:10:34.575 "name": "BaseBdev2", 00:10:34.575 "uuid": "6aa35656-5ecc-4090-9398-7da8193e44e0", 00:10:34.575 "is_configured": true, 00:10:34.575 "data_offset": 2048, 00:10:34.575 "data_size": 63488 00:10:34.575 }, 00:10:34.575 { 00:10:34.575 "name": "BaseBdev3", 00:10:34.575 "uuid": "f4652297-b6ef-4d22-88dd-4fa07eab5d6a", 00:10:34.575 "is_configured": true, 00:10:34.575 "data_offset": 2048, 00:10:34.575 "data_size": 63488 00:10:34.575 }, 00:10:34.575 { 00:10:34.575 "name": "BaseBdev4", 00:10:34.575 "uuid": "6ff2bdf6-cef2-43bd-bd4f-b079f759c107", 00:10:34.575 "is_configured": true, 00:10:34.575 "data_offset": 2048, 00:10:34.575 "data_size": 63488 00:10:34.575 } 00:10:34.575 ] 00:10:34.575 }' 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.575 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.146 16:36:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bf2c41ae-ef06-473f-a7be-83776fc49417 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.146 [2024-12-07 16:36:33.948501] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:35.146 [2024-12-07 16:36:33.948804] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:35.146 [2024-12-07 16:36:33.948853] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:35.146 [2024-12-07 16:36:33.949180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:35.146 [2024-12-07 16:36:33.949354] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:35.146 [2024-12-07 16:36:33.949402] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:35.146 NewBaseBdev 00:10:35.146 [2024-12-07 16:36:33.949544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.146 16:36:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.146 [ 00:10:35.146 { 00:10:35.146 "name": "NewBaseBdev", 00:10:35.146 "aliases": [ 00:10:35.146 "bf2c41ae-ef06-473f-a7be-83776fc49417" 00:10:35.146 ], 00:10:35.146 "product_name": "Malloc disk", 00:10:35.146 "block_size": 512, 00:10:35.146 "num_blocks": 65536, 00:10:35.146 "uuid": "bf2c41ae-ef06-473f-a7be-83776fc49417", 00:10:35.146 "assigned_rate_limits": { 00:10:35.146 "rw_ios_per_sec": 0, 00:10:35.146 "rw_mbytes_per_sec": 0, 00:10:35.146 "r_mbytes_per_sec": 0, 00:10:35.146 "w_mbytes_per_sec": 0 00:10:35.146 }, 00:10:35.146 "claimed": true, 00:10:35.146 "claim_type": "exclusive_write", 00:10:35.146 "zoned": false, 00:10:35.146 "supported_io_types": { 00:10:35.146 "read": true, 00:10:35.146 "write": true, 00:10:35.146 "unmap": true, 00:10:35.146 "flush": true, 00:10:35.146 "reset": true, 00:10:35.146 "nvme_admin": false, 00:10:35.146 "nvme_io": false, 00:10:35.146 "nvme_io_md": false, 00:10:35.146 "write_zeroes": true, 00:10:35.146 "zcopy": true, 00:10:35.146 "get_zone_info": false, 00:10:35.146 "zone_management": false, 00:10:35.146 "zone_append": false, 00:10:35.146 "compare": false, 00:10:35.146 "compare_and_write": false, 00:10:35.146 "abort": true, 00:10:35.146 "seek_hole": false, 00:10:35.146 "seek_data": false, 00:10:35.146 "copy": true, 00:10:35.146 "nvme_iov_md": false 00:10:35.146 }, 00:10:35.146 "memory_domains": [ 00:10:35.146 { 00:10:35.146 "dma_device_id": "system", 00:10:35.146 "dma_device_type": 1 00:10:35.146 }, 00:10:35.146 { 00:10:35.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.146 "dma_device_type": 2 00:10:35.146 } 00:10:35.146 ], 00:10:35.146 "driver_specific": {} 00:10:35.146 } 00:10:35.146 ] 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:35.146 16:36:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.146 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.147 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.147 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.147 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.147 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.147 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.147 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.147 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.147 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.147 16:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.147 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.147 16:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.147 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.147 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.147 "name": "Existed_Raid", 00:10:35.147 "uuid": "4988c4aa-864c-4f51-9578-d7cc2fed6557", 00:10:35.147 "strip_size_kb": 64, 00:10:35.147 
"state": "online", 00:10:35.147 "raid_level": "concat", 00:10:35.147 "superblock": true, 00:10:35.147 "num_base_bdevs": 4, 00:10:35.147 "num_base_bdevs_discovered": 4, 00:10:35.147 "num_base_bdevs_operational": 4, 00:10:35.147 "base_bdevs_list": [ 00:10:35.147 { 00:10:35.147 "name": "NewBaseBdev", 00:10:35.147 "uuid": "bf2c41ae-ef06-473f-a7be-83776fc49417", 00:10:35.147 "is_configured": true, 00:10:35.147 "data_offset": 2048, 00:10:35.147 "data_size": 63488 00:10:35.147 }, 00:10:35.147 { 00:10:35.147 "name": "BaseBdev2", 00:10:35.147 "uuid": "6aa35656-5ecc-4090-9398-7da8193e44e0", 00:10:35.147 "is_configured": true, 00:10:35.147 "data_offset": 2048, 00:10:35.147 "data_size": 63488 00:10:35.147 }, 00:10:35.147 { 00:10:35.147 "name": "BaseBdev3", 00:10:35.147 "uuid": "f4652297-b6ef-4d22-88dd-4fa07eab5d6a", 00:10:35.147 "is_configured": true, 00:10:35.147 "data_offset": 2048, 00:10:35.147 "data_size": 63488 00:10:35.147 }, 00:10:35.147 { 00:10:35.147 "name": "BaseBdev4", 00:10:35.147 "uuid": "6ff2bdf6-cef2-43bd-bd4f-b079f759c107", 00:10:35.147 "is_configured": true, 00:10:35.147 "data_offset": 2048, 00:10:35.147 "data_size": 63488 00:10:35.147 } 00:10:35.147 ] 00:10:35.147 }' 00:10:35.147 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.147 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.715 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:35.715 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:35.715 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:35.715 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:35.715 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:35.715 
16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:35.715 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:35.715 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:35.715 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.715 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.715 [2024-12-07 16:36:34.428134] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.715 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.715 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:35.715 "name": "Existed_Raid", 00:10:35.715 "aliases": [ 00:10:35.715 "4988c4aa-864c-4f51-9578-d7cc2fed6557" 00:10:35.715 ], 00:10:35.715 "product_name": "Raid Volume", 00:10:35.715 "block_size": 512, 00:10:35.715 "num_blocks": 253952, 00:10:35.715 "uuid": "4988c4aa-864c-4f51-9578-d7cc2fed6557", 00:10:35.715 "assigned_rate_limits": { 00:10:35.715 "rw_ios_per_sec": 0, 00:10:35.715 "rw_mbytes_per_sec": 0, 00:10:35.715 "r_mbytes_per_sec": 0, 00:10:35.715 "w_mbytes_per_sec": 0 00:10:35.715 }, 00:10:35.715 "claimed": false, 00:10:35.715 "zoned": false, 00:10:35.715 "supported_io_types": { 00:10:35.715 "read": true, 00:10:35.715 "write": true, 00:10:35.715 "unmap": true, 00:10:35.715 "flush": true, 00:10:35.715 "reset": true, 00:10:35.715 "nvme_admin": false, 00:10:35.715 "nvme_io": false, 00:10:35.715 "nvme_io_md": false, 00:10:35.715 "write_zeroes": true, 00:10:35.715 "zcopy": false, 00:10:35.715 "get_zone_info": false, 00:10:35.715 "zone_management": false, 00:10:35.715 "zone_append": false, 00:10:35.715 "compare": false, 00:10:35.715 "compare_and_write": false, 00:10:35.715 "abort": 
false, 00:10:35.715 "seek_hole": false, 00:10:35.715 "seek_data": false, 00:10:35.715 "copy": false, 00:10:35.715 "nvme_iov_md": false 00:10:35.715 }, 00:10:35.715 "memory_domains": [ 00:10:35.715 { 00:10:35.715 "dma_device_id": "system", 00:10:35.715 "dma_device_type": 1 00:10:35.715 }, 00:10:35.715 { 00:10:35.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.715 "dma_device_type": 2 00:10:35.715 }, 00:10:35.715 { 00:10:35.715 "dma_device_id": "system", 00:10:35.715 "dma_device_type": 1 00:10:35.715 }, 00:10:35.715 { 00:10:35.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.716 "dma_device_type": 2 00:10:35.716 }, 00:10:35.716 { 00:10:35.716 "dma_device_id": "system", 00:10:35.716 "dma_device_type": 1 00:10:35.716 }, 00:10:35.716 { 00:10:35.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.716 "dma_device_type": 2 00:10:35.716 }, 00:10:35.716 { 00:10:35.716 "dma_device_id": "system", 00:10:35.716 "dma_device_type": 1 00:10:35.716 }, 00:10:35.716 { 00:10:35.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.716 "dma_device_type": 2 00:10:35.716 } 00:10:35.716 ], 00:10:35.716 "driver_specific": { 00:10:35.716 "raid": { 00:10:35.716 "uuid": "4988c4aa-864c-4f51-9578-d7cc2fed6557", 00:10:35.716 "strip_size_kb": 64, 00:10:35.716 "state": "online", 00:10:35.716 "raid_level": "concat", 00:10:35.716 "superblock": true, 00:10:35.716 "num_base_bdevs": 4, 00:10:35.716 "num_base_bdevs_discovered": 4, 00:10:35.716 "num_base_bdevs_operational": 4, 00:10:35.716 "base_bdevs_list": [ 00:10:35.716 { 00:10:35.716 "name": "NewBaseBdev", 00:10:35.716 "uuid": "bf2c41ae-ef06-473f-a7be-83776fc49417", 00:10:35.716 "is_configured": true, 00:10:35.716 "data_offset": 2048, 00:10:35.716 "data_size": 63488 00:10:35.716 }, 00:10:35.716 { 00:10:35.716 "name": "BaseBdev2", 00:10:35.716 "uuid": "6aa35656-5ecc-4090-9398-7da8193e44e0", 00:10:35.716 "is_configured": true, 00:10:35.716 "data_offset": 2048, 00:10:35.716 "data_size": 63488 00:10:35.716 }, 00:10:35.716 { 00:10:35.716 
"name": "BaseBdev3", 00:10:35.716 "uuid": "f4652297-b6ef-4d22-88dd-4fa07eab5d6a", 00:10:35.716 "is_configured": true, 00:10:35.716 "data_offset": 2048, 00:10:35.716 "data_size": 63488 00:10:35.716 }, 00:10:35.716 { 00:10:35.716 "name": "BaseBdev4", 00:10:35.716 "uuid": "6ff2bdf6-cef2-43bd-bd4f-b079f759c107", 00:10:35.716 "is_configured": true, 00:10:35.716 "data_offset": 2048, 00:10:35.716 "data_size": 63488 00:10:35.716 } 00:10:35.716 ] 00:10:35.716 } 00:10:35.716 } 00:10:35.716 }' 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:35.716 BaseBdev2 00:10:35.716 BaseBdev3 00:10:35.716 BaseBdev4' 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.716 16:36:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.716 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.976 [2024-12-07 16:36:34.699309] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.976 [2024-12-07 16:36:34.699414] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.976 [2024-12-07 16:36:34.699553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.976 [2024-12-07 16:36:34.699656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.976 [2024-12-07 16:36:34.699704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83125 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83125 ']' 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83125 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83125 00:10:35.976 killing process with pid 83125 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83125' 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83125 00:10:35.976 [2024-12-07 16:36:34.741659] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.976 16:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83125 00:10:35.976 [2024-12-07 16:36:34.823224] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.545 16:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:36.545 00:10:36.545 real 0m9.708s 00:10:36.545 user 0m16.179s 00:10:36.545 sys 0m2.147s 00:10:36.545 ************************************ 00:10:36.545 END TEST raid_state_function_test_sb 00:10:36.545 
************************************ 00:10:36.545 16:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.545 16:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.545 16:36:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:36.545 16:36:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:36.545 16:36:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.545 16:36:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.545 ************************************ 00:10:36.545 START TEST raid_superblock_test 00:10:36.545 ************************************ 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:36.545 16:36:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83779 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83779 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83779 ']' 00:10:36.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:36.545 16:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.545 [2024-12-07 16:36:35.373353] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:36.545 [2024-12-07 16:36:35.373573] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83779 ] 00:10:36.805 [2024-12-07 16:36:35.534256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.805 [2024-12-07 16:36:35.611515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.805 [2024-12-07 16:36:35.692261] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.805 [2024-12-07 16:36:35.692313] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:37.374 
16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.374 malloc1 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.374 [2024-12-07 16:36:36.241079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:37.374 [2024-12-07 16:36:36.241201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.374 [2024-12-07 16:36:36.241240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:37.374 [2024-12-07 16:36:36.241286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.374 [2024-12-07 16:36:36.243840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.374 [2024-12-07 16:36:36.243914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:37.374 pt1 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.374 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.634 malloc2 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.634 [2024-12-07 16:36:36.284569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:37.634 [2024-12-07 16:36:36.284644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.634 [2024-12-07 16:36:36.284666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:37.634 [2024-12-07 16:36:36.284682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.634 [2024-12-07 16:36:36.287489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.634 [2024-12-07 16:36:36.287525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:37.634 
pt2 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.634 malloc3 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.634 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.634 [2024-12-07 16:36:36.319475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:37.635 [2024-12-07 16:36:36.319572] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.635 [2024-12-07 16:36:36.319606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:37.635 [2024-12-07 16:36:36.319637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.635 [2024-12-07 16:36:36.322028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.635 [2024-12-07 16:36:36.322095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:37.635 pt3 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.635 malloc4 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.635 [2024-12-07 16:36:36.346210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:37.635 [2024-12-07 16:36:36.346259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.635 [2024-12-07 16:36:36.346276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:37.635 [2024-12-07 16:36:36.346290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.635 [2024-12-07 16:36:36.348687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.635 [2024-12-07 16:36:36.348763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:37.635 pt4 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.635 [2024-12-07 16:36:36.358278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:37.635 [2024-12-07 
16:36:36.360446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:37.635 [2024-12-07 16:36:36.360540] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:37.635 [2024-12-07 16:36:36.360623] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:37.635 [2024-12-07 16:36:36.360818] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:37.635 [2024-12-07 16:36:36.360868] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:37.635 [2024-12-07 16:36:36.361153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:37.635 [2024-12-07 16:36:36.361329] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:37.635 [2024-12-07 16:36:36.361383] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:37.635 [2024-12-07 16:36:36.361546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.635 "name": "raid_bdev1", 00:10:37.635 "uuid": "92fb33dd-0447-40af-a93e-876bda5e5369", 00:10:37.635 "strip_size_kb": 64, 00:10:37.635 "state": "online", 00:10:37.635 "raid_level": "concat", 00:10:37.635 "superblock": true, 00:10:37.635 "num_base_bdevs": 4, 00:10:37.635 "num_base_bdevs_discovered": 4, 00:10:37.635 "num_base_bdevs_operational": 4, 00:10:37.635 "base_bdevs_list": [ 00:10:37.635 { 00:10:37.635 "name": "pt1", 00:10:37.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.635 "is_configured": true, 00:10:37.635 "data_offset": 2048, 00:10:37.635 "data_size": 63488 00:10:37.635 }, 00:10:37.635 { 00:10:37.635 "name": "pt2", 00:10:37.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.635 "is_configured": true, 00:10:37.635 "data_offset": 2048, 00:10:37.635 "data_size": 63488 00:10:37.635 }, 00:10:37.635 { 00:10:37.635 "name": "pt3", 00:10:37.635 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:37.635 "is_configured": true, 00:10:37.635 "data_offset": 2048, 00:10:37.635 
"data_size": 63488 00:10:37.635 }, 00:10:37.635 { 00:10:37.635 "name": "pt4", 00:10:37.635 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:37.635 "is_configured": true, 00:10:37.635 "data_offset": 2048, 00:10:37.635 "data_size": 63488 00:10:37.635 } 00:10:37.635 ] 00:10:37.635 }' 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.635 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.895 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:37.895 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:37.895 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.895 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.895 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.895 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.895 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.895 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:37.895 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.895 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.895 [2024-12-07 16:36:36.789944] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.155 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.155 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:38.155 "name": "raid_bdev1", 00:10:38.155 "aliases": [ 00:10:38.155 "92fb33dd-0447-40af-a93e-876bda5e5369" 
00:10:38.155 ], 00:10:38.155 "product_name": "Raid Volume", 00:10:38.155 "block_size": 512, 00:10:38.155 "num_blocks": 253952, 00:10:38.155 "uuid": "92fb33dd-0447-40af-a93e-876bda5e5369", 00:10:38.155 "assigned_rate_limits": { 00:10:38.155 "rw_ios_per_sec": 0, 00:10:38.155 "rw_mbytes_per_sec": 0, 00:10:38.155 "r_mbytes_per_sec": 0, 00:10:38.155 "w_mbytes_per_sec": 0 00:10:38.156 }, 00:10:38.156 "claimed": false, 00:10:38.156 "zoned": false, 00:10:38.156 "supported_io_types": { 00:10:38.156 "read": true, 00:10:38.156 "write": true, 00:10:38.156 "unmap": true, 00:10:38.156 "flush": true, 00:10:38.156 "reset": true, 00:10:38.156 "nvme_admin": false, 00:10:38.156 "nvme_io": false, 00:10:38.156 "nvme_io_md": false, 00:10:38.156 "write_zeroes": true, 00:10:38.156 "zcopy": false, 00:10:38.156 "get_zone_info": false, 00:10:38.156 "zone_management": false, 00:10:38.156 "zone_append": false, 00:10:38.156 "compare": false, 00:10:38.156 "compare_and_write": false, 00:10:38.156 "abort": false, 00:10:38.156 "seek_hole": false, 00:10:38.156 "seek_data": false, 00:10:38.156 "copy": false, 00:10:38.156 "nvme_iov_md": false 00:10:38.156 }, 00:10:38.156 "memory_domains": [ 00:10:38.156 { 00:10:38.156 "dma_device_id": "system", 00:10:38.156 "dma_device_type": 1 00:10:38.156 }, 00:10:38.156 { 00:10:38.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.156 "dma_device_type": 2 00:10:38.156 }, 00:10:38.156 { 00:10:38.156 "dma_device_id": "system", 00:10:38.156 "dma_device_type": 1 00:10:38.156 }, 00:10:38.156 { 00:10:38.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.156 "dma_device_type": 2 00:10:38.156 }, 00:10:38.156 { 00:10:38.156 "dma_device_id": "system", 00:10:38.156 "dma_device_type": 1 00:10:38.156 }, 00:10:38.156 { 00:10:38.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.156 "dma_device_type": 2 00:10:38.156 }, 00:10:38.156 { 00:10:38.156 "dma_device_id": "system", 00:10:38.156 "dma_device_type": 1 00:10:38.156 }, 00:10:38.156 { 00:10:38.156 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:38.156 "dma_device_type": 2 00:10:38.156 } 00:10:38.156 ], 00:10:38.156 "driver_specific": { 00:10:38.156 "raid": { 00:10:38.156 "uuid": "92fb33dd-0447-40af-a93e-876bda5e5369", 00:10:38.156 "strip_size_kb": 64, 00:10:38.156 "state": "online", 00:10:38.156 "raid_level": "concat", 00:10:38.156 "superblock": true, 00:10:38.156 "num_base_bdevs": 4, 00:10:38.156 "num_base_bdevs_discovered": 4, 00:10:38.156 "num_base_bdevs_operational": 4, 00:10:38.156 "base_bdevs_list": [ 00:10:38.156 { 00:10:38.156 "name": "pt1", 00:10:38.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.156 "is_configured": true, 00:10:38.156 "data_offset": 2048, 00:10:38.156 "data_size": 63488 00:10:38.156 }, 00:10:38.156 { 00:10:38.156 "name": "pt2", 00:10:38.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.156 "is_configured": true, 00:10:38.156 "data_offset": 2048, 00:10:38.156 "data_size": 63488 00:10:38.156 }, 00:10:38.156 { 00:10:38.156 "name": "pt3", 00:10:38.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.156 "is_configured": true, 00:10:38.156 "data_offset": 2048, 00:10:38.156 "data_size": 63488 00:10:38.156 }, 00:10:38.156 { 00:10:38.156 "name": "pt4", 00:10:38.156 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:38.156 "is_configured": true, 00:10:38.156 "data_offset": 2048, 00:10:38.156 "data_size": 63488 00:10:38.156 } 00:10:38.156 ] 00:10:38.156 } 00:10:38.156 } 00:10:38.156 }' 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:38.156 pt2 00:10:38.156 pt3 00:10:38.156 pt4' 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.156 16:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.156 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.156 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.156 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.156 16:36:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:38.156 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.156 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.156 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.156 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.416 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.416 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.416 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.416 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.417 [2024-12-07 16:36:37.129228] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=92fb33dd-0447-40af-a93e-876bda5e5369 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 92fb33dd-0447-40af-a93e-876bda5e5369 ']' 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.417 [2024-12-07 16:36:37.180812] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.417 [2024-12-07 16:36:37.180888] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.417 [2024-12-07 16:36:37.180999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.417 [2024-12-07 16:36:37.181099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.417 [2024-12-07 16:36:37.181148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:38.417 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.676 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:38.677 16:36:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.677 [2024-12-07 16:36:37.336592] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:38.677 [2024-12-07 16:36:37.338852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:38.677 [2024-12-07 16:36:37.338968] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:38.677 [2024-12-07 16:36:37.339019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:38.677 [2024-12-07 16:36:37.339101] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:38.677 [2024-12-07 16:36:37.339176] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:38.677 [2024-12-07 16:36:37.339219] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:38.677 [2024-12-07 16:36:37.339237] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:38.677 [2024-12-07 16:36:37.339252] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.677 [2024-12-07 16:36:37.339262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name raid_bdev1, state configuring 00:10:38.677 request: 00:10:38.677 { 00:10:38.677 "name": "raid_bdev1", 00:10:38.677 "raid_level": "concat", 00:10:38.677 "base_bdevs": [ 00:10:38.677 "malloc1", 00:10:38.677 "malloc2", 00:10:38.677 "malloc3", 00:10:38.677 "malloc4" 00:10:38.677 ], 00:10:38.677 "strip_size_kb": 64, 00:10:38.677 "superblock": false, 00:10:38.677 "method": "bdev_raid_create", 00:10:38.677 "req_id": 1 00:10:38.677 } 00:10:38.677 Got JSON-RPC error response 00:10:38.677 response: 00:10:38.677 { 00:10:38.677 "code": -17, 00:10:38.677 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:38.677 } 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.677 [2024-12-07 16:36:37.384537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:38.677 [2024-12-07 16:36:37.384652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.677 [2024-12-07 16:36:37.384698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:38.677 [2024-12-07 16:36:37.384738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.677 [2024-12-07 16:36:37.387404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.677 [2024-12-07 16:36:37.387475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:38.677 [2024-12-07 16:36:37.387608] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:38.677 [2024-12-07 16:36:37.387689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:38.677 pt1 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.677 "name": "raid_bdev1", 00:10:38.677 "uuid": "92fb33dd-0447-40af-a93e-876bda5e5369", 00:10:38.677 "strip_size_kb": 64, 00:10:38.677 "state": "configuring", 00:10:38.677 "raid_level": "concat", 00:10:38.677 "superblock": true, 00:10:38.677 "num_base_bdevs": 4, 00:10:38.677 "num_base_bdevs_discovered": 1, 00:10:38.677 "num_base_bdevs_operational": 4, 00:10:38.677 "base_bdevs_list": [ 00:10:38.677 { 00:10:38.677 "name": "pt1", 00:10:38.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.677 "is_configured": true, 00:10:38.677 "data_offset": 2048, 00:10:38.677 "data_size": 63488 00:10:38.677 }, 00:10:38.677 { 00:10:38.677 "name": null, 00:10:38.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.677 "is_configured": false, 00:10:38.677 "data_offset": 2048, 00:10:38.677 "data_size": 63488 00:10:38.677 }, 00:10:38.677 { 00:10:38.677 "name": null, 00:10:38.677 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.677 "is_configured": false, 00:10:38.677 "data_offset": 2048, 00:10:38.677 "data_size": 63488 00:10:38.677 }, 00:10:38.677 { 00:10:38.677 "name": null, 00:10:38.677 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:38.677 "is_configured": false, 00:10:38.677 "data_offset": 2048, 00:10:38.677 "data_size": 63488 00:10:38.677 } 00:10:38.677 ] 00:10:38.677 }' 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.677 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.253 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:39.253 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:39.253 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.253 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.253 [2024-12-07 16:36:37.839754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:39.253 [2024-12-07 16:36:37.839833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.253 [2024-12-07 16:36:37.839859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:39.253 [2024-12-07 16:36:37.839869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.253 [2024-12-07 16:36:37.840403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.253 [2024-12-07 16:36:37.840424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:39.253 [2024-12-07 16:36:37.840522] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:39.253 [2024-12-07 16:36:37.840548] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:39.253 pt2 00:10:39.253 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.253 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:39.253 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.253 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.253 [2024-12-07 16:36:37.847756] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:39.253 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.253 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:39.253 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.253 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.253 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.253 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.254 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.254 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.254 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.254 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.254 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.254 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.254 16:36:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.254 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.254 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.254 16:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.254 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.254 "name": "raid_bdev1", 00:10:39.254 "uuid": "92fb33dd-0447-40af-a93e-876bda5e5369", 00:10:39.254 "strip_size_kb": 64, 00:10:39.254 "state": "configuring", 00:10:39.254 "raid_level": "concat", 00:10:39.254 "superblock": true, 00:10:39.254 "num_base_bdevs": 4, 00:10:39.254 "num_base_bdevs_discovered": 1, 00:10:39.254 "num_base_bdevs_operational": 4, 00:10:39.254 "base_bdevs_list": [ 00:10:39.254 { 00:10:39.254 "name": "pt1", 00:10:39.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.254 "is_configured": true, 00:10:39.254 "data_offset": 2048, 00:10:39.254 "data_size": 63488 00:10:39.254 }, 00:10:39.254 { 00:10:39.254 "name": null, 00:10:39.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.254 "is_configured": false, 00:10:39.254 "data_offset": 0, 00:10:39.254 "data_size": 63488 00:10:39.254 }, 00:10:39.254 { 00:10:39.254 "name": null, 00:10:39.254 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.254 "is_configured": false, 00:10:39.254 "data_offset": 2048, 00:10:39.254 "data_size": 63488 00:10:39.254 }, 00:10:39.254 { 00:10:39.254 "name": null, 00:10:39.254 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:39.254 "is_configured": false, 00:10:39.254 "data_offset": 2048, 00:10:39.254 "data_size": 63488 00:10:39.254 } 00:10:39.254 ] 00:10:39.254 }' 00:10:39.254 16:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.254 16:36:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.529 [2024-12-07 16:36:38.315149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:39.529 [2024-12-07 16:36:38.315282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.529 [2024-12-07 16:36:38.315323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:39.529 [2024-12-07 16:36:38.315373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.529 [2024-12-07 16:36:38.315895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.529 [2024-12-07 16:36:38.315955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:39.529 [2024-12-07 16:36:38.316073] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:39.529 [2024-12-07 16:36:38.316131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:39.529 pt2 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.529 [2024-12-07 16:36:38.327079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:39.529 [2024-12-07 16:36:38.327169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.529 [2024-12-07 16:36:38.327203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:39.529 [2024-12-07 16:36:38.327233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.529 [2024-12-07 16:36:38.327630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.529 [2024-12-07 16:36:38.327684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:39.529 [2024-12-07 16:36:38.327768] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:39.529 [2024-12-07 16:36:38.327814] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:39.529 pt3 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.529 [2024-12-07 16:36:38.339050] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:39.529 [2024-12-07 16:36:38.339135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.529 [2024-12-07 16:36:38.339165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:39.529 [2024-12-07 16:36:38.339191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.529 [2024-12-07 16:36:38.339541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.529 [2024-12-07 16:36:38.339595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:39.529 [2024-12-07 16:36:38.339670] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:39.529 [2024-12-07 16:36:38.339716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:39.529 [2024-12-07 16:36:38.339845] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:39.529 [2024-12-07 16:36:38.339888] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:39.529 [2024-12-07 16:36:38.340157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:39.529 [2024-12-07 16:36:38.340311] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:39.529 [2024-12-07 16:36:38.340324] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:39.529 [2024-12-07 16:36:38.340443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.529 pt4 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.529 "name": "raid_bdev1", 00:10:39.529 "uuid": "92fb33dd-0447-40af-a93e-876bda5e5369", 00:10:39.529 "strip_size_kb": 64, 00:10:39.529 "state": "online", 00:10:39.529 "raid_level": "concat", 00:10:39.529 
"superblock": true, 00:10:39.529 "num_base_bdevs": 4, 00:10:39.529 "num_base_bdevs_discovered": 4, 00:10:39.529 "num_base_bdevs_operational": 4, 00:10:39.529 "base_bdevs_list": [ 00:10:39.529 { 00:10:39.529 "name": "pt1", 00:10:39.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.529 "is_configured": true, 00:10:39.529 "data_offset": 2048, 00:10:39.529 "data_size": 63488 00:10:39.529 }, 00:10:39.529 { 00:10:39.529 "name": "pt2", 00:10:39.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.529 "is_configured": true, 00:10:39.529 "data_offset": 2048, 00:10:39.529 "data_size": 63488 00:10:39.529 }, 00:10:39.529 { 00:10:39.529 "name": "pt3", 00:10:39.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.529 "is_configured": true, 00:10:39.529 "data_offset": 2048, 00:10:39.529 "data_size": 63488 00:10:39.529 }, 00:10:39.529 { 00:10:39.529 "name": "pt4", 00:10:39.529 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:39.529 "is_configured": true, 00:10:39.529 "data_offset": 2048, 00:10:39.529 "data_size": 63488 00:10:39.529 } 00:10:39.529 ] 00:10:39.529 }' 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.529 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.100 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:40.100 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:40.100 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.100 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.100 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.100 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.100 16:36:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:40.100 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.100 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.100 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.100 [2024-12-07 16:36:38.794728] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.100 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.100 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.100 "name": "raid_bdev1", 00:10:40.100 "aliases": [ 00:10:40.100 "92fb33dd-0447-40af-a93e-876bda5e5369" 00:10:40.100 ], 00:10:40.100 "product_name": "Raid Volume", 00:10:40.100 "block_size": 512, 00:10:40.100 "num_blocks": 253952, 00:10:40.100 "uuid": "92fb33dd-0447-40af-a93e-876bda5e5369", 00:10:40.100 "assigned_rate_limits": { 00:10:40.100 "rw_ios_per_sec": 0, 00:10:40.100 "rw_mbytes_per_sec": 0, 00:10:40.100 "r_mbytes_per_sec": 0, 00:10:40.100 "w_mbytes_per_sec": 0 00:10:40.100 }, 00:10:40.100 "claimed": false, 00:10:40.100 "zoned": false, 00:10:40.100 "supported_io_types": { 00:10:40.100 "read": true, 00:10:40.100 "write": true, 00:10:40.100 "unmap": true, 00:10:40.100 "flush": true, 00:10:40.100 "reset": true, 00:10:40.100 "nvme_admin": false, 00:10:40.100 "nvme_io": false, 00:10:40.100 "nvme_io_md": false, 00:10:40.100 "write_zeroes": true, 00:10:40.100 "zcopy": false, 00:10:40.100 "get_zone_info": false, 00:10:40.100 "zone_management": false, 00:10:40.100 "zone_append": false, 00:10:40.100 "compare": false, 00:10:40.100 "compare_and_write": false, 00:10:40.100 "abort": false, 00:10:40.100 "seek_hole": false, 00:10:40.100 "seek_data": false, 00:10:40.100 "copy": false, 00:10:40.100 "nvme_iov_md": false 00:10:40.100 }, 00:10:40.100 
"memory_domains": [ 00:10:40.100 { 00:10:40.100 "dma_device_id": "system", 00:10:40.100 "dma_device_type": 1 00:10:40.100 }, 00:10:40.100 { 00:10:40.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.100 "dma_device_type": 2 00:10:40.100 }, 00:10:40.100 { 00:10:40.100 "dma_device_id": "system", 00:10:40.100 "dma_device_type": 1 00:10:40.100 }, 00:10:40.100 { 00:10:40.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.100 "dma_device_type": 2 00:10:40.100 }, 00:10:40.100 { 00:10:40.100 "dma_device_id": "system", 00:10:40.100 "dma_device_type": 1 00:10:40.100 }, 00:10:40.100 { 00:10:40.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.100 "dma_device_type": 2 00:10:40.100 }, 00:10:40.100 { 00:10:40.100 "dma_device_id": "system", 00:10:40.100 "dma_device_type": 1 00:10:40.100 }, 00:10:40.100 { 00:10:40.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.100 "dma_device_type": 2 00:10:40.100 } 00:10:40.100 ], 00:10:40.100 "driver_specific": { 00:10:40.100 "raid": { 00:10:40.100 "uuid": "92fb33dd-0447-40af-a93e-876bda5e5369", 00:10:40.100 "strip_size_kb": 64, 00:10:40.100 "state": "online", 00:10:40.100 "raid_level": "concat", 00:10:40.100 "superblock": true, 00:10:40.100 "num_base_bdevs": 4, 00:10:40.100 "num_base_bdevs_discovered": 4, 00:10:40.100 "num_base_bdevs_operational": 4, 00:10:40.100 "base_bdevs_list": [ 00:10:40.100 { 00:10:40.100 "name": "pt1", 00:10:40.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:40.100 "is_configured": true, 00:10:40.100 "data_offset": 2048, 00:10:40.100 "data_size": 63488 00:10:40.100 }, 00:10:40.100 { 00:10:40.100 "name": "pt2", 00:10:40.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.100 "is_configured": true, 00:10:40.100 "data_offset": 2048, 00:10:40.100 "data_size": 63488 00:10:40.100 }, 00:10:40.100 { 00:10:40.100 "name": "pt3", 00:10:40.100 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.100 "is_configured": true, 00:10:40.100 "data_offset": 2048, 00:10:40.100 "data_size": 63488 
00:10:40.100 }, 00:10:40.100 { 00:10:40.100 "name": "pt4", 00:10:40.101 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:40.101 "is_configured": true, 00:10:40.101 "data_offset": 2048, 00:10:40.101 "data_size": 63488 00:10:40.101 } 00:10:40.101 ] 00:10:40.101 } 00:10:40.101 } 00:10:40.101 }' 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:40.101 pt2 00:10:40.101 pt3 00:10:40.101 pt4' 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.101 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.361 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:40.361 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.361 16:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.361 16:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:40.361 
16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.361 [2024-12-07 16:36:39.110068] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 92fb33dd-0447-40af-a93e-876bda5e5369 '!=' 92fb33dd-0447-40af-a93e-876bda5e5369 ']' 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83779 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83779 ']' 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83779 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83779 00:10:40.361 killing process with pid 83779 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83779' 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83779 00:10:40.361 [2024-12-07 16:36:39.199881] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.361 [2024-12-07 16:36:39.199988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.361 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83779 00:10:40.361 [2024-12-07 16:36:39.200067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.361 [2024-12-07 16:36:39.200081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:40.621 [2024-12-07 16:36:39.281929] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.881 16:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:40.881 00:10:40.881 real 0m4.372s 00:10:40.881 user 0m6.637s 00:10:40.881 sys 0m1.099s 00:10:40.881 ************************************ 00:10:40.881 END TEST raid_superblock_test 00:10:40.881 ************************************ 00:10:40.881 16:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.881 16:36:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.881 16:36:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:40.881 16:36:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:40.881 16:36:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:40.881 16:36:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.881 ************************************ 00:10:40.881 START TEST raid_read_error_test 00:10:40.881 ************************************ 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HYEh8GzWYD 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84027 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84027 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 84027 ']' 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.881 16:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.142 [2024-12-07 16:36:39.843409] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:41.142 [2024-12-07 16:36:39.843684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84027 ] 00:10:41.142 [2024-12-07 16:36:39.990229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.401 [2024-12-07 16:36:40.062691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.401 [2024-12-07 16:36:40.141393] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.401 [2024-12-07 16:36:40.141530] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.971 BaseBdev1_malloc 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.971 true 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.971 [2024-12-07 16:36:40.725245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:41.971 [2024-12-07 16:36:40.725346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.971 [2024-12-07 16:36:40.725375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:41.971 [2024-12-07 16:36:40.725385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.971 [2024-12-07 16:36:40.727794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.971 [2024-12-07 16:36:40.727830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:41.971 BaseBdev1 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.971 BaseBdev2_malloc 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.971 true 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.971 [2024-12-07 16:36:40.782948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:41.971 [2024-12-07 16:36:40.783017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.971 [2024-12-07 16:36:40.783040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:41.971 [2024-12-07 16:36:40.783050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.971 [2024-12-07 16:36:40.785532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.971 [2024-12-07 16:36:40.785564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:41.971 BaseBdev2 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.971 BaseBdev3_malloc 00:10:41.971 16:36:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.971 true 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.971 [2024-12-07 16:36:40.829802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:41.971 [2024-12-07 16:36:40.829852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.971 [2024-12-07 16:36:40.829873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:41.971 [2024-12-07 16:36:40.829882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.971 [2024-12-07 16:36:40.832316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.971 [2024-12-07 16:36:40.832405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:41.971 BaseBdev3 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.971 BaseBdev4_malloc 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.971 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.232 true 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.232 [2024-12-07 16:36:40.877389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:42.232 [2024-12-07 16:36:40.877506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.232 [2024-12-07 16:36:40.877538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:42.232 [2024-12-07 16:36:40.877548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.232 [2024-12-07 16:36:40.880015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.232 [2024-12-07 16:36:40.880051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:42.232 BaseBdev4 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.232 [2024-12-07 16:36:40.889424] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.232 [2024-12-07 16:36:40.891647] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.232 [2024-12-07 16:36:40.891738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.232 [2024-12-07 16:36:40.891795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:42.232 [2024-12-07 16:36:40.892004] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:42.232 [2024-12-07 16:36:40.892017] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:42.232 [2024-12-07 16:36:40.892308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:42.232 [2024-12-07 16:36:40.892494] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:42.232 [2024-12-07 16:36:40.892528] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:42.232 [2024-12-07 16:36:40.892656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:42.232 16:36:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.232 "name": "raid_bdev1", 00:10:42.232 "uuid": "6418ce7d-c50c-43a3-b3f7-bfb7cd65e705", 00:10:42.232 "strip_size_kb": 64, 00:10:42.232 "state": "online", 00:10:42.232 "raid_level": "concat", 00:10:42.232 "superblock": true, 00:10:42.232 "num_base_bdevs": 4, 00:10:42.232 "num_base_bdevs_discovered": 4, 00:10:42.232 "num_base_bdevs_operational": 4, 00:10:42.232 "base_bdevs_list": [ 
00:10:42.232 { 00:10:42.232 "name": "BaseBdev1", 00:10:42.232 "uuid": "7f590c42-b75a-5d25-a193-ba805f71eed5", 00:10:42.232 "is_configured": true, 00:10:42.232 "data_offset": 2048, 00:10:42.232 "data_size": 63488 00:10:42.232 }, 00:10:42.232 { 00:10:42.232 "name": "BaseBdev2", 00:10:42.232 "uuid": "1f5a3e44-b105-51e2-be29-92b2be5c3ba3", 00:10:42.232 "is_configured": true, 00:10:42.232 "data_offset": 2048, 00:10:42.232 "data_size": 63488 00:10:42.232 }, 00:10:42.232 { 00:10:42.232 "name": "BaseBdev3", 00:10:42.232 "uuid": "d20da963-8fe0-561f-a85d-1788c0712af7", 00:10:42.232 "is_configured": true, 00:10:42.232 "data_offset": 2048, 00:10:42.232 "data_size": 63488 00:10:42.232 }, 00:10:42.232 { 00:10:42.232 "name": "BaseBdev4", 00:10:42.232 "uuid": "05e03009-d84b-5f1e-ba5f-170e19e5027a", 00:10:42.232 "is_configured": true, 00:10:42.232 "data_offset": 2048, 00:10:42.232 "data_size": 63488 00:10:42.232 } 00:10:42.232 ] 00:10:42.232 }' 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.232 16:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.492 16:36:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:42.492 16:36:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:42.752 [2024-12-07 16:36:41.400997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.693 16:36:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.693 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.693 16:36:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.693 "name": "raid_bdev1", 00:10:43.693 "uuid": "6418ce7d-c50c-43a3-b3f7-bfb7cd65e705", 00:10:43.693 "strip_size_kb": 64, 00:10:43.693 "state": "online", 00:10:43.694 "raid_level": "concat", 00:10:43.694 "superblock": true, 00:10:43.694 "num_base_bdevs": 4, 00:10:43.694 "num_base_bdevs_discovered": 4, 00:10:43.694 "num_base_bdevs_operational": 4, 00:10:43.694 "base_bdevs_list": [ 00:10:43.694 { 00:10:43.694 "name": "BaseBdev1", 00:10:43.694 "uuid": "7f590c42-b75a-5d25-a193-ba805f71eed5", 00:10:43.694 "is_configured": true, 00:10:43.694 "data_offset": 2048, 00:10:43.694 "data_size": 63488 00:10:43.694 }, 00:10:43.694 { 00:10:43.694 "name": "BaseBdev2", 00:10:43.694 "uuid": "1f5a3e44-b105-51e2-be29-92b2be5c3ba3", 00:10:43.694 "is_configured": true, 00:10:43.694 "data_offset": 2048, 00:10:43.694 "data_size": 63488 00:10:43.694 }, 00:10:43.694 { 00:10:43.694 "name": "BaseBdev3", 00:10:43.694 "uuid": "d20da963-8fe0-561f-a85d-1788c0712af7", 00:10:43.694 "is_configured": true, 00:10:43.694 "data_offset": 2048, 00:10:43.694 "data_size": 63488 00:10:43.694 }, 00:10:43.694 { 00:10:43.694 "name": "BaseBdev4", 00:10:43.694 "uuid": "05e03009-d84b-5f1e-ba5f-170e19e5027a", 00:10:43.694 "is_configured": true, 00:10:43.694 "data_offset": 2048, 00:10:43.694 "data_size": 63488 00:10:43.694 } 00:10:43.694 ] 00:10:43.694 }' 00:10:43.694 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.694 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.955 [2024-12-07 16:36:42.793794] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:43.955 [2024-12-07 16:36:42.793892] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.955 [2024-12-07 16:36:42.796422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.955 [2024-12-07 16:36:42.796521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.955 [2024-12-07 16:36:42.796593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.955 [2024-12-07 16:36:42.796638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:43.955 { 00:10:43.955 "results": [ 00:10:43.955 { 00:10:43.955 "job": "raid_bdev1", 00:10:43.955 "core_mask": "0x1", 00:10:43.955 "workload": "randrw", 00:10:43.955 "percentage": 50, 00:10:43.955 "status": "finished", 00:10:43.955 "queue_depth": 1, 00:10:43.955 "io_size": 131072, 00:10:43.955 "runtime": 1.393361, 00:10:43.955 "iops": 14347.322768471344, 00:10:43.955 "mibps": 1793.415346058918, 00:10:43.955 "io_failed": 1, 00:10:43.955 "io_timeout": 0, 00:10:43.955 "avg_latency_us": 97.98590388120313, 00:10:43.955 "min_latency_us": 25.152838427947597, 00:10:43.955 "max_latency_us": 1566.8541484716156 00:10:43.955 } 00:10:43.955 ], 00:10:43.955 "core_count": 1 00:10:43.955 } 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84027 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 84027 ']' 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 84027 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84027 00:10:43.955 killing process with pid 84027 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84027' 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 84027 00:10:43.955 [2024-12-07 16:36:42.844889] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:43.955 16:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 84027 00:10:44.215 [2024-12-07 16:36:42.912034] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:44.475 16:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HYEh8GzWYD 00:10:44.475 16:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:44.475 16:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:44.475 ************************************ 00:10:44.475 END TEST raid_read_error_test 00:10:44.475 ************************************ 00:10:44.475 16:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:44.475 16:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:44.475 16:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:44.475 16:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:44.475 16:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:44.475 00:10:44.475 real 0m3.560s 
00:10:44.475 user 0m4.314s 00:10:44.475 sys 0m0.668s 00:10:44.475 16:36:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:44.475 16:36:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.475 16:36:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:44.475 16:36:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:44.475 16:36:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:44.475 16:36:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:44.475 ************************************ 00:10:44.475 START TEST raid_write_error_test 00:10:44.475 ************************************ 00:10:44.475 16:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:10:44.475 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:44.475 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:44.475 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:44.475 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:44.475 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:44.475 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:44.475 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:44.475 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:44.475 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xk8yH4xQ5g 00:10:44.736 16:36:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84168 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84168 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 84168 ']' 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:44.736 16:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.736 [2024-12-07 16:36:43.478031] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:44.736 [2024-12-07 16:36:43.478171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84168 ] 00:10:44.996 [2024-12-07 16:36:43.643186] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.996 [2024-12-07 16:36:43.717147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.996 [2024-12-07 16:36:43.793457] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.996 [2024-12-07 16:36:43.793502] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.566 BaseBdev1_malloc 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.566 true 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.566 [2024-12-07 16:36:44.319674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:45.566 [2024-12-07 16:36:44.319747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.566 [2024-12-07 16:36:44.319771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:45.566 [2024-12-07 16:36:44.319781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.566 [2024-12-07 16:36:44.322272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.566 [2024-12-07 16:36:44.322309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:45.566 BaseBdev1 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.566 BaseBdev2_malloc 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:45.566 16:36:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.566 true 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.566 [2024-12-07 16:36:44.375157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:45.566 [2024-12-07 16:36:44.375209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.566 [2024-12-07 16:36:44.375229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:45.566 [2024-12-07 16:36:44.375238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.566 [2024-12-07 16:36:44.377542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.566 [2024-12-07 16:36:44.377575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:45.566 BaseBdev2 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:45.566 BaseBdev3_malloc 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.566 true 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.566 [2024-12-07 16:36:44.421755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:45.566 [2024-12-07 16:36:44.421798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.566 [2024-12-07 16:36:44.421816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:45.566 [2024-12-07 16:36:44.421825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.566 [2024-12-07 16:36:44.424127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.566 [2024-12-07 16:36:44.424243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:45.566 BaseBdev3 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.566 BaseBdev4_malloc 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.566 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.826 true 00:10:45.826 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.826 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:45.826 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.826 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.826 [2024-12-07 16:36:44.468342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:45.826 [2024-12-07 16:36:44.468397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.826 [2024-12-07 16:36:44.468420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:45.826 [2024-12-07 16:36:44.468430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.826 [2024-12-07 16:36:44.470677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.826 [2024-12-07 16:36:44.470764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:45.826 BaseBdev4 
00:10:45.826 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.826 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:10:45.826 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.826 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.826 [2024-12-07 16:36:44.480385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:45.826 [2024-12-07 16:36:44.482407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:45.826 [2024-12-07 16:36:44.482493] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:45.826 [2024-12-07 16:36:44.482546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:45.826 [2024-12-07 16:36:44.482742] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080
00:10:45.826 [2024-12-07 16:36:44.482753] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:10:45.826 [2024-12-07 16:36:44.483039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:10:45.826 [2024-12-07 16:36:44.483175] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080
00:10:45.826 [2024-12-07 16:36:44.483188] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080
00:10:45.826 [2024-12-07 16:36:44.483315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:45.826 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.826 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:10:45.826 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:45.826 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:45.827 "name": "raid_bdev1",
00:10:45.827 "uuid": "d6254ff1-2b90-4604-84ee-6d5b33a27d2e",
00:10:45.827 "strip_size_kb": 64,
00:10:45.827 "state": "online",
00:10:45.827 "raid_level": "concat",
00:10:45.827 "superblock": true,
00:10:45.827 "num_base_bdevs": 4,
00:10:45.827 "num_base_bdevs_discovered": 4,
00:10:45.827 "num_base_bdevs_operational": 4,
00:10:45.827 "base_bdevs_list": [
00:10:45.827 {
00:10:45.827 "name": "BaseBdev1",
00:10:45.827 "uuid": "0b2dd058-eaf7-5437-8b8e-733c2ba51a5d",
00:10:45.827 "is_configured": true,
00:10:45.827 "data_offset": 2048,
00:10:45.827 "data_size": 63488
00:10:45.827 },
00:10:45.827 {
00:10:45.827 "name": "BaseBdev2",
00:10:45.827 "uuid": "71bd42b5-f364-5664-934a-5850ad35d73b",
00:10:45.827 "is_configured": true,
00:10:45.827 "data_offset": 2048,
00:10:45.827 "data_size": 63488
00:10:45.827 },
00:10:45.827 {
00:10:45.827 "name": "BaseBdev3",
00:10:45.827 "uuid": "7c1c2563-ec88-5d13-b841-f939b3cd50f0",
00:10:45.827 "is_configured": true,
00:10:45.827 "data_offset": 2048,
00:10:45.827 "data_size": 63488
00:10:45.827 },
00:10:45.827 {
00:10:45.827 "name": "BaseBdev4",
00:10:45.827 "uuid": "81e1f0dd-bb0a-5ee3-ad69-0634d979340f",
00:10:45.827 "is_configured": true,
00:10:45.827 "data_offset": 2048,
00:10:45.827 "data_size": 63488
00:10:45.827 }
00:10:45.827 ]
00:10:45.827 }'
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:45.827 16:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.086 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:10:46.086 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:10:46.345 [2024-12-07 16:36:45.015954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:47.328 "name": "raid_bdev1",
00:10:47.328 "uuid": "d6254ff1-2b90-4604-84ee-6d5b33a27d2e",
00:10:47.328 "strip_size_kb": 64,
00:10:47.328 "state": "online",
00:10:47.328 "raid_level": "concat",
00:10:47.328 "superblock": true,
00:10:47.328 "num_base_bdevs": 4,
00:10:47.328 "num_base_bdevs_discovered": 4,
00:10:47.328 "num_base_bdevs_operational": 4,
00:10:47.328 "base_bdevs_list": [
00:10:47.328 {
00:10:47.328 "name": "BaseBdev1",
00:10:47.328 "uuid": "0b2dd058-eaf7-5437-8b8e-733c2ba51a5d",
00:10:47.328 "is_configured": true,
00:10:47.328 "data_offset": 2048,
00:10:47.328 "data_size": 63488
00:10:47.328 },
00:10:47.328 {
00:10:47.328 "name": "BaseBdev2",
00:10:47.328 "uuid": "71bd42b5-f364-5664-934a-5850ad35d73b",
00:10:47.328 "is_configured": true,
00:10:47.328 "data_offset": 2048,
00:10:47.328 "data_size": 63488
00:10:47.328 },
00:10:47.328 {
00:10:47.328 "name": "BaseBdev3",
00:10:47.328 "uuid": "7c1c2563-ec88-5d13-b841-f939b3cd50f0",
00:10:47.328 "is_configured": true,
00:10:47.328 "data_offset": 2048,
00:10:47.328 "data_size": 63488
00:10:47.328 },
00:10:47.328 {
00:10:47.328 "name": "BaseBdev4",
00:10:47.328 "uuid": "81e1f0dd-bb0a-5ee3-ad69-0634d979340f",
00:10:47.328 "is_configured": true,
00:10:47.328 "data_offset": 2048,
00:10:47.328 "data_size": 63488
00:10:47.328 }
00:10:47.328 ]
00:10:47.328 }'
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:47.328 16:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.594 [2024-12-07 16:36:46.416804] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:47.594 [2024-12-07 16:36:46.416842] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:47.594 [2024-12-07 16:36:46.419393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:47.594 [2024-12-07 16:36:46.419455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:47.594 [2024-12-07 16:36:46.419507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:47.594 [2024-12-07 16:36:46.419517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline
00:10:47.594 {
00:10:47.594 "results": [
00:10:47.594 {
00:10:47.594 "job": "raid_bdev1",
00:10:47.594 "core_mask": "0x1",
00:10:47.594 "workload": "randrw",
00:10:47.594 "percentage": 50,
00:10:47.594 "status": "finished",
00:10:47.594 "queue_depth": 1,
00:10:47.594 "io_size": 131072,
00:10:47.594 "runtime": 1.401387,
00:10:47.594 "iops": 14384.320676586838,
00:10:47.594 "mibps": 1798.0400845733548,
00:10:47.594 "io_failed": 1,
00:10:47.594 "io_timeout": 0,
00:10:47.594 "avg_latency_us": 97.80907150598159,
00:10:47.594 "min_latency_us": 24.929257641921396,
00:10:47.594 "max_latency_us": 1380.8349344978167
00:10:47.594 }
00:10:47.594 ],
00:10:47.594 "core_count": 1
00:10:47.594 }
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84168
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 84168 ']'
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 84168
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84168
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84168' killing process with pid 84168
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 84168
00:10:47.594 [2024-12-07 16:36:46.469157] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:47.594 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 84168
00:10:47.861 [2024-12-07 16:36:46.536248] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:48.120 16:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xk8yH4xQ5g
00:10:48.120 16:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:10:48.120 16:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:10:48.120 16:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
00:10:48.120 16:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:10:48.120 16:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:48.120 16:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:10:48.121 16:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
00:10:48.121
00:10:48.121 real 0m3.550s user 0m4.303s
00:10:48.121 sys 0m0.685s
00:10:48.121 ************************************
00:10:48.121 END TEST raid_write_error_test
00:10:48.121 ************************************
00:10:48.121 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:48.121 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.121 16:36:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:10:48.121 16:36:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false
00:10:48.121 16:36:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:10:48.121 16:36:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:48.121 16:36:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:48.121 ************************************
00:10:48.121 START TEST raid_state_function_test
00:10:48.121 ************************************
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:48.121 16:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:48.121 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:10:48.121 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:10:48.121 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:10:48.121 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= Process raid pid: 84295
00:10:48.121 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84295
00:10:48.121 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:48.121 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84295'
00:10:48.121 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84295
00:10:48.121 16:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 84295 ']'
00:10:48.121 16:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:48.121 16:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:48.121 16:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:48.121 16:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:48.121 16:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.380 [2024-12-07 16:36:47.088813] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:10:48.380 [2024-12-07 16:36:47.088966] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:48.380 [2024-12-07 16:36:47.254174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:48.651 [2024-12-07 16:36:47.331666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:48.651 [2024-12-07 16:36:47.409382] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:48.651 [2024-12-07 16:36:47.409424] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.222 [2024-12-07 16:36:47.972896] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:49.222 [2024-12-07 16:36:47.972960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:49.222 [2024-12-07 16:36:47.972980] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:49.222 [2024-12-07 16:36:47.972991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:49.222 [2024-12-07 16:36:47.973000] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:49.222 [2024-12-07 16:36:47.973015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:49.222 [2024-12-07 16:36:47.973020] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:49.222 [2024-12-07 16:36:47.973030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.222 16:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.222 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.222 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:49.222 "name": "Existed_Raid",
00:10:49.222 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.222 "strip_size_kb": 0,
00:10:49.222 "state": "configuring",
00:10:49.222 "raid_level": "raid1",
00:10:49.222 "superblock": false,
00:10:49.222 "num_base_bdevs": 4,
00:10:49.222 "num_base_bdevs_discovered": 0,
00:10:49.222 "num_base_bdevs_operational": 4,
00:10:49.222 "base_bdevs_list": [
00:10:49.222 {
00:10:49.222 "name": "BaseBdev1",
00:10:49.222 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.222 "is_configured": false,
00:10:49.222 "data_offset": 0,
00:10:49.222 "data_size": 0
00:10:49.222 },
00:10:49.222 {
00:10:49.222 "name": "BaseBdev2",
00:10:49.222 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.222 "is_configured": false,
00:10:49.222 "data_offset": 0,
00:10:49.222 "data_size": 0
00:10:49.222 },
00:10:49.222 {
00:10:49.222 "name": "BaseBdev3",
00:10:49.222 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.222 "is_configured": false,
00:10:49.222 "data_offset": 0,
00:10:49.222 "data_size": 0
00:10:49.222 },
00:10:49.222 {
00:10:49.222 "name": "BaseBdev4",
00:10:49.222 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.222 "is_configured": false,
00:10:49.222 "data_offset": 0,
00:10:49.222 "data_size": 0
00:10:49.222 }
00:10:49.222 ]
00:10:49.222 }'
00:10:49.222 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:49.222 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.791 [2024-12-07 16:36:48.459989] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:49.791 [2024-12-07 16:36:48.460042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.791 [2024-12-07 16:36:48.468019] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:49.791 [2024-12-07 16:36:48.468067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:49.791 [2024-12-07 16:36:48.468076] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:49.791 [2024-12-07 16:36:48.468086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:49.791 [2024-12-07 16:36:48.468092] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:49.791 [2024-12-07 16:36:48.468102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:49.791 [2024-12-07 16:36:48.468109] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:49.791 [2024-12-07 16:36:48.468119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.791 [2024-12-07 16:36:48.491107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed BaseBdev1
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.791 [
00:10:49.791 {
00:10:49.791 "name": "BaseBdev1",
00:10:49.791 "aliases": [
00:10:49.791 "80106896-0f85-4b63-a79e-32f17dd675c0"
00:10:49.791 ],
00:10:49.791 "product_name": "Malloc disk",
00:10:49.791 "block_size": 512,
00:10:49.791 "num_blocks": 65536,
00:10:49.791 "uuid": "80106896-0f85-4b63-a79e-32f17dd675c0",
00:10:49.791 "assigned_rate_limits": {
00:10:49.791 "rw_ios_per_sec": 0,
00:10:49.791 "rw_mbytes_per_sec": 0,
00:10:49.791 "r_mbytes_per_sec": 0,
00:10:49.791 "w_mbytes_per_sec": 0
00:10:49.791 },
00:10:49.791 "claimed": true,
00:10:49.791 "claim_type": "exclusive_write",
00:10:49.791 "zoned": false,
00:10:49.791 "supported_io_types": {
00:10:49.791 "read": true,
00:10:49.791 "write": true,
00:10:49.791 "unmap": true,
00:10:49.791 "flush": true,
00:10:49.791 "reset": true,
00:10:49.791 "nvme_admin": false,
00:10:49.791 "nvme_io": false,
00:10:49.791 "nvme_io_md": false,
00:10:49.791 "write_zeroes": true,
00:10:49.791 "zcopy": true,
00:10:49.791 "get_zone_info": false,
00:10:49.791 "zone_management": false,
00:10:49.791 "zone_append": false,
00:10:49.791 "compare": false,
00:10:49.791 "compare_and_write": false,
00:10:49.791 "abort": true,
00:10:49.791 "seek_hole": false,
00:10:49.791 "seek_data": false,
00:10:49.791 "copy": true,
00:10:49.791 "nvme_iov_md": false
00:10:49.791 },
00:10:49.791 "memory_domains": [
00:10:49.791 {
00:10:49.791 "dma_device_id": "system",
00:10:49.791 "dma_device_type": 1
00:10:49.791 },
00:10:49.791 {
00:10:49.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:49.791 "dma_device_type": 2
00:10:49.791 }
00:10:49.791 ],
00:10:49.791 "driver_specific": {}
00:10:49.791 }
00:10:49.791 ]
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.791 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:49.791 "name": "Existed_Raid",
00:10:49.791 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.791 "strip_size_kb": 0,
00:10:49.791 "state": "configuring",
00:10:49.791 "raid_level": "raid1",
00:10:49.791 "superblock": false,
00:10:49.791 "num_base_bdevs": 4,
00:10:49.791 "num_base_bdevs_discovered": 1,
00:10:49.791 "num_base_bdevs_operational": 4,
00:10:49.791 "base_bdevs_list": [
00:10:49.791 {
00:10:49.791 "name": "BaseBdev1",
00:10:49.791 "uuid": "80106896-0f85-4b63-a79e-32f17dd675c0",
00:10:49.791 "is_configured": true,
00:10:49.791 "data_offset": 0,
00:10:49.791 "data_size": 65536
00:10:49.791 },
00:10:49.792 {
00:10:49.792 "name": "BaseBdev2",
00:10:49.792 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.792 "is_configured": false,
00:10:49.792 "data_offset": 0,
00:10:49.792 "data_size": 0
00:10:49.792 },
00:10:49.792 {
00:10:49.792 "name": "BaseBdev3",
00:10:49.792 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.792 "is_configured": false,
00:10:49.792 "data_offset": 0,
00:10:49.792 "data_size": 0
00:10:49.792 },
00:10:49.792 {
00:10:49.792 "name": "BaseBdev4",
00:10:49.792 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.792 "is_configured": false,
00:10:49.792 "data_offset": 0,
00:10:49.792 "data_size": 0
00:10:49.792 }
00:10:49.792 ]
00:10:49.792 }'
00:10:49.792 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:49.792 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.052 [2024-12-07 16:36:48.922428] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:50.052 [2024-12-07 16:36:48.922592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.052 [2024-12-07 16:36:48.930439] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:50.052 [2024-12-07 16:36:48.932714] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:50.052 [2024-12-07 16:36:48.932752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:50.052 [2024-12-07 16:36:48.932763] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:50.052 [2024-12-07 16:36:48.932771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:50.052 [2024-12-07 16:36:48.932778] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:50.052 [2024-12-07 16:36:48.932786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.052 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.313 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.313 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.313 "name": "Existed_Raid", 00:10:50.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.313 "strip_size_kb": 0, 00:10:50.313 "state": "configuring", 00:10:50.313 "raid_level": "raid1", 00:10:50.313 "superblock": false, 00:10:50.313 "num_base_bdevs": 4, 00:10:50.313 "num_base_bdevs_discovered": 1, 
00:10:50.313 "num_base_bdevs_operational": 4, 00:10:50.313 "base_bdevs_list": [ 00:10:50.313 { 00:10:50.313 "name": "BaseBdev1", 00:10:50.313 "uuid": "80106896-0f85-4b63-a79e-32f17dd675c0", 00:10:50.313 "is_configured": true, 00:10:50.313 "data_offset": 0, 00:10:50.313 "data_size": 65536 00:10:50.313 }, 00:10:50.313 { 00:10:50.313 "name": "BaseBdev2", 00:10:50.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.313 "is_configured": false, 00:10:50.313 "data_offset": 0, 00:10:50.313 "data_size": 0 00:10:50.313 }, 00:10:50.313 { 00:10:50.313 "name": "BaseBdev3", 00:10:50.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.313 "is_configured": false, 00:10:50.313 "data_offset": 0, 00:10:50.313 "data_size": 0 00:10:50.313 }, 00:10:50.313 { 00:10:50.313 "name": "BaseBdev4", 00:10:50.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.313 "is_configured": false, 00:10:50.313 "data_offset": 0, 00:10:50.313 "data_size": 0 00:10:50.313 } 00:10:50.313 ] 00:10:50.313 }' 00:10:50.313 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.313 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.574 [2024-12-07 16:36:49.405691] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.574 BaseBdev2 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.574 [ 00:10:50.574 { 00:10:50.574 "name": "BaseBdev2", 00:10:50.574 "aliases": [ 00:10:50.574 "daadcb29-28dd-45a4-8d8a-b8af7e4f44d1" 00:10:50.574 ], 00:10:50.574 "product_name": "Malloc disk", 00:10:50.574 "block_size": 512, 00:10:50.574 "num_blocks": 65536, 00:10:50.574 "uuid": "daadcb29-28dd-45a4-8d8a-b8af7e4f44d1", 00:10:50.574 "assigned_rate_limits": { 00:10:50.574 "rw_ios_per_sec": 0, 00:10:50.574 "rw_mbytes_per_sec": 0, 00:10:50.574 "r_mbytes_per_sec": 0, 00:10:50.574 "w_mbytes_per_sec": 0 00:10:50.574 }, 00:10:50.574 "claimed": true, 00:10:50.574 "claim_type": "exclusive_write", 00:10:50.574 "zoned": false, 00:10:50.574 "supported_io_types": { 00:10:50.574 "read": true, 
00:10:50.574 "write": true, 00:10:50.574 "unmap": true, 00:10:50.574 "flush": true, 00:10:50.574 "reset": true, 00:10:50.574 "nvme_admin": false, 00:10:50.574 "nvme_io": false, 00:10:50.574 "nvme_io_md": false, 00:10:50.574 "write_zeroes": true, 00:10:50.574 "zcopy": true, 00:10:50.574 "get_zone_info": false, 00:10:50.574 "zone_management": false, 00:10:50.574 "zone_append": false, 00:10:50.574 "compare": false, 00:10:50.574 "compare_and_write": false, 00:10:50.574 "abort": true, 00:10:50.574 "seek_hole": false, 00:10:50.574 "seek_data": false, 00:10:50.574 "copy": true, 00:10:50.574 "nvme_iov_md": false 00:10:50.574 }, 00:10:50.574 "memory_domains": [ 00:10:50.574 { 00:10:50.574 "dma_device_id": "system", 00:10:50.574 "dma_device_type": 1 00:10:50.574 }, 00:10:50.574 { 00:10:50.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.574 "dma_device_type": 2 00:10:50.574 } 00:10:50.574 ], 00:10:50.574 "driver_specific": {} 00:10:50.574 } 00:10:50.574 ] 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.574 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.835 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.835 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.835 "name": "Existed_Raid", 00:10:50.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.835 "strip_size_kb": 0, 00:10:50.835 "state": "configuring", 00:10:50.835 "raid_level": "raid1", 00:10:50.835 "superblock": false, 00:10:50.835 "num_base_bdevs": 4, 00:10:50.835 "num_base_bdevs_discovered": 2, 00:10:50.835 "num_base_bdevs_operational": 4, 00:10:50.835 "base_bdevs_list": [ 00:10:50.835 { 00:10:50.835 "name": "BaseBdev1", 00:10:50.835 "uuid": "80106896-0f85-4b63-a79e-32f17dd675c0", 00:10:50.835 "is_configured": true, 00:10:50.835 "data_offset": 0, 00:10:50.835 "data_size": 65536 00:10:50.835 }, 00:10:50.835 { 00:10:50.835 "name": "BaseBdev2", 00:10:50.835 "uuid": "daadcb29-28dd-45a4-8d8a-b8af7e4f44d1", 00:10:50.835 "is_configured": true, 
00:10:50.835 "data_offset": 0, 00:10:50.835 "data_size": 65536 00:10:50.835 }, 00:10:50.835 { 00:10:50.835 "name": "BaseBdev3", 00:10:50.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.835 "is_configured": false, 00:10:50.835 "data_offset": 0, 00:10:50.835 "data_size": 0 00:10:50.835 }, 00:10:50.835 { 00:10:50.835 "name": "BaseBdev4", 00:10:50.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.835 "is_configured": false, 00:10:50.835 "data_offset": 0, 00:10:50.835 "data_size": 0 00:10:50.835 } 00:10:50.835 ] 00:10:50.835 }' 00:10:50.835 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.835 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.094 [2024-12-07 16:36:49.945987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.094 BaseBdev3 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.094 [ 00:10:51.094 { 00:10:51.094 "name": "BaseBdev3", 00:10:51.094 "aliases": [ 00:10:51.094 "a5396308-bbfa-4fbb-bb51-e940e1366110" 00:10:51.094 ], 00:10:51.094 "product_name": "Malloc disk", 00:10:51.094 "block_size": 512, 00:10:51.094 "num_blocks": 65536, 00:10:51.094 "uuid": "a5396308-bbfa-4fbb-bb51-e940e1366110", 00:10:51.094 "assigned_rate_limits": { 00:10:51.094 "rw_ios_per_sec": 0, 00:10:51.094 "rw_mbytes_per_sec": 0, 00:10:51.094 "r_mbytes_per_sec": 0, 00:10:51.094 "w_mbytes_per_sec": 0 00:10:51.094 }, 00:10:51.094 "claimed": true, 00:10:51.094 "claim_type": "exclusive_write", 00:10:51.094 "zoned": false, 00:10:51.094 "supported_io_types": { 00:10:51.094 "read": true, 00:10:51.094 "write": true, 00:10:51.094 "unmap": true, 00:10:51.094 "flush": true, 00:10:51.094 "reset": true, 00:10:51.094 "nvme_admin": false, 00:10:51.094 "nvme_io": false, 00:10:51.094 "nvme_io_md": false, 00:10:51.094 "write_zeroes": true, 00:10:51.094 "zcopy": true, 00:10:51.094 "get_zone_info": false, 00:10:51.094 "zone_management": false, 00:10:51.094 "zone_append": false, 00:10:51.094 "compare": false, 00:10:51.094 "compare_and_write": false, 
00:10:51.094 "abort": true, 00:10:51.094 "seek_hole": false, 00:10:51.094 "seek_data": false, 00:10:51.094 "copy": true, 00:10:51.094 "nvme_iov_md": false 00:10:51.094 }, 00:10:51.094 "memory_domains": [ 00:10:51.094 { 00:10:51.094 "dma_device_id": "system", 00:10:51.094 "dma_device_type": 1 00:10:51.094 }, 00:10:51.094 { 00:10:51.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.094 "dma_device_type": 2 00:10:51.094 } 00:10:51.094 ], 00:10:51.094 "driver_specific": {} 00:10:51.094 } 00:10:51.094 ] 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:51.094 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.353 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.353 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.353 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.353 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.353 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.353 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.353 "name": "Existed_Raid", 00:10:51.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.353 "strip_size_kb": 0, 00:10:51.353 "state": "configuring", 00:10:51.353 "raid_level": "raid1", 00:10:51.353 "superblock": false, 00:10:51.353 "num_base_bdevs": 4, 00:10:51.353 "num_base_bdevs_discovered": 3, 00:10:51.353 "num_base_bdevs_operational": 4, 00:10:51.353 "base_bdevs_list": [ 00:10:51.353 { 00:10:51.353 "name": "BaseBdev1", 00:10:51.353 "uuid": "80106896-0f85-4b63-a79e-32f17dd675c0", 00:10:51.353 "is_configured": true, 00:10:51.353 "data_offset": 0, 00:10:51.353 "data_size": 65536 00:10:51.353 }, 00:10:51.353 { 00:10:51.353 "name": "BaseBdev2", 00:10:51.353 "uuid": "daadcb29-28dd-45a4-8d8a-b8af7e4f44d1", 00:10:51.353 "is_configured": true, 00:10:51.353 "data_offset": 0, 00:10:51.353 "data_size": 65536 00:10:51.353 }, 00:10:51.353 { 00:10:51.353 "name": "BaseBdev3", 00:10:51.353 "uuid": "a5396308-bbfa-4fbb-bb51-e940e1366110", 00:10:51.353 "is_configured": true, 00:10:51.353 "data_offset": 0, 00:10:51.353 "data_size": 65536 00:10:51.353 }, 00:10:51.353 { 00:10:51.353 "name": "BaseBdev4", 00:10:51.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.353 "is_configured": false, 
00:10:51.353 "data_offset": 0, 00:10:51.353 "data_size": 0 00:10:51.353 } 00:10:51.353 ] 00:10:51.353 }' 00:10:51.353 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.353 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.613 [2024-12-07 16:36:50.370817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:51.613 [2024-12-07 16:36:50.370990] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:51.613 [2024-12-07 16:36:50.371008] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:51.613 [2024-12-07 16:36:50.371380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:51.613 [2024-12-07 16:36:50.371560] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:51.613 [2024-12-07 16:36:50.371575] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:51.613 [2024-12-07 16:36:50.371812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.613 BaseBdev4 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.613 [ 00:10:51.613 { 00:10:51.613 "name": "BaseBdev4", 00:10:51.613 "aliases": [ 00:10:51.613 "f0de216a-59de-4a2f-83a2-2762cb5d7b96" 00:10:51.613 ], 00:10:51.613 "product_name": "Malloc disk", 00:10:51.613 "block_size": 512, 00:10:51.613 "num_blocks": 65536, 00:10:51.613 "uuid": "f0de216a-59de-4a2f-83a2-2762cb5d7b96", 00:10:51.613 "assigned_rate_limits": { 00:10:51.613 "rw_ios_per_sec": 0, 00:10:51.613 "rw_mbytes_per_sec": 0, 00:10:51.613 "r_mbytes_per_sec": 0, 00:10:51.613 "w_mbytes_per_sec": 0 00:10:51.613 }, 00:10:51.613 "claimed": true, 00:10:51.613 "claim_type": "exclusive_write", 00:10:51.613 "zoned": false, 00:10:51.613 "supported_io_types": { 00:10:51.613 "read": true, 00:10:51.613 "write": true, 00:10:51.613 "unmap": true, 00:10:51.613 "flush": true, 00:10:51.613 "reset": true, 00:10:51.613 
"nvme_admin": false, 00:10:51.613 "nvme_io": false, 00:10:51.613 "nvme_io_md": false, 00:10:51.613 "write_zeroes": true, 00:10:51.613 "zcopy": true, 00:10:51.613 "get_zone_info": false, 00:10:51.613 "zone_management": false, 00:10:51.613 "zone_append": false, 00:10:51.613 "compare": false, 00:10:51.613 "compare_and_write": false, 00:10:51.613 "abort": true, 00:10:51.613 "seek_hole": false, 00:10:51.613 "seek_data": false, 00:10:51.613 "copy": true, 00:10:51.613 "nvme_iov_md": false 00:10:51.613 }, 00:10:51.613 "memory_domains": [ 00:10:51.613 { 00:10:51.613 "dma_device_id": "system", 00:10:51.613 "dma_device_type": 1 00:10:51.613 }, 00:10:51.613 { 00:10:51.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.613 "dma_device_type": 2 00:10:51.613 } 00:10:51.613 ], 00:10:51.613 "driver_specific": {} 00:10:51.613 } 00:10:51.613 ] 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.613 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.614 16:36:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.614 "name": "Existed_Raid", 00:10:51.614 "uuid": "1984f7be-788f-407e-adb9-dbfa1db2a3c8", 00:10:51.614 "strip_size_kb": 0, 00:10:51.614 "state": "online", 00:10:51.614 "raid_level": "raid1", 00:10:51.614 "superblock": false, 00:10:51.614 "num_base_bdevs": 4, 00:10:51.614 "num_base_bdevs_discovered": 4, 00:10:51.614 "num_base_bdevs_operational": 4, 00:10:51.614 "base_bdevs_list": [ 00:10:51.614 { 00:10:51.614 "name": "BaseBdev1", 00:10:51.614 "uuid": "80106896-0f85-4b63-a79e-32f17dd675c0", 00:10:51.614 "is_configured": true, 00:10:51.614 "data_offset": 0, 00:10:51.614 "data_size": 65536 00:10:51.614 }, 00:10:51.614 { 00:10:51.614 "name": "BaseBdev2", 00:10:51.614 "uuid": "daadcb29-28dd-45a4-8d8a-b8af7e4f44d1", 00:10:51.614 "is_configured": true, 00:10:51.614 "data_offset": 0, 00:10:51.614 "data_size": 65536 00:10:51.614 }, 00:10:51.614 { 00:10:51.614 "name": "BaseBdev3", 00:10:51.614 "uuid": 
"a5396308-bbfa-4fbb-bb51-e940e1366110", 00:10:51.614 "is_configured": true, 00:10:51.614 "data_offset": 0, 00:10:51.614 "data_size": 65536 00:10:51.614 }, 00:10:51.614 { 00:10:51.614 "name": "BaseBdev4", 00:10:51.614 "uuid": "f0de216a-59de-4a2f-83a2-2762cb5d7b96", 00:10:51.614 "is_configured": true, 00:10:51.614 "data_offset": 0, 00:10:51.614 "data_size": 65536 00:10:51.614 } 00:10:51.614 ] 00:10:51.614 }' 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.614 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.183 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:52.183 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:52.183 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:52.183 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:52.183 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:52.183 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:52.183 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:52.183 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:52.183 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.183 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.183 [2024-12-07 16:36:50.882393] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.183 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.183 16:36:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:52.183 "name": "Existed_Raid", 00:10:52.183 "aliases": [ 00:10:52.183 "1984f7be-788f-407e-adb9-dbfa1db2a3c8" 00:10:52.183 ], 00:10:52.183 "product_name": "Raid Volume", 00:10:52.183 "block_size": 512, 00:10:52.183 "num_blocks": 65536, 00:10:52.183 "uuid": "1984f7be-788f-407e-adb9-dbfa1db2a3c8", 00:10:52.183 "assigned_rate_limits": { 00:10:52.183 "rw_ios_per_sec": 0, 00:10:52.183 "rw_mbytes_per_sec": 0, 00:10:52.183 "r_mbytes_per_sec": 0, 00:10:52.183 "w_mbytes_per_sec": 0 00:10:52.183 }, 00:10:52.183 "claimed": false, 00:10:52.183 "zoned": false, 00:10:52.183 "supported_io_types": { 00:10:52.183 "read": true, 00:10:52.183 "write": true, 00:10:52.183 "unmap": false, 00:10:52.183 "flush": false, 00:10:52.183 "reset": true, 00:10:52.183 "nvme_admin": false, 00:10:52.183 "nvme_io": false, 00:10:52.183 "nvme_io_md": false, 00:10:52.183 "write_zeroes": true, 00:10:52.183 "zcopy": false, 00:10:52.183 "get_zone_info": false, 00:10:52.183 "zone_management": false, 00:10:52.183 "zone_append": false, 00:10:52.183 "compare": false, 00:10:52.183 "compare_and_write": false, 00:10:52.183 "abort": false, 00:10:52.183 "seek_hole": false, 00:10:52.183 "seek_data": false, 00:10:52.183 "copy": false, 00:10:52.183 "nvme_iov_md": false 00:10:52.183 }, 00:10:52.183 "memory_domains": [ 00:10:52.183 { 00:10:52.183 "dma_device_id": "system", 00:10:52.183 "dma_device_type": 1 00:10:52.183 }, 00:10:52.183 { 00:10:52.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.183 "dma_device_type": 2 00:10:52.183 }, 00:10:52.183 { 00:10:52.183 "dma_device_id": "system", 00:10:52.183 "dma_device_type": 1 00:10:52.183 }, 00:10:52.183 { 00:10:52.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.183 "dma_device_type": 2 00:10:52.183 }, 00:10:52.183 { 00:10:52.183 "dma_device_id": "system", 00:10:52.183 "dma_device_type": 1 00:10:52.183 }, 00:10:52.183 { 00:10:52.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:52.183 "dma_device_type": 2 00:10:52.183 }, 00:10:52.183 { 00:10:52.183 "dma_device_id": "system", 00:10:52.183 "dma_device_type": 1 00:10:52.183 }, 00:10:52.183 { 00:10:52.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.183 "dma_device_type": 2 00:10:52.183 } 00:10:52.183 ], 00:10:52.183 "driver_specific": { 00:10:52.183 "raid": { 00:10:52.183 "uuid": "1984f7be-788f-407e-adb9-dbfa1db2a3c8", 00:10:52.183 "strip_size_kb": 0, 00:10:52.183 "state": "online", 00:10:52.183 "raid_level": "raid1", 00:10:52.183 "superblock": false, 00:10:52.183 "num_base_bdevs": 4, 00:10:52.183 "num_base_bdevs_discovered": 4, 00:10:52.183 "num_base_bdevs_operational": 4, 00:10:52.183 "base_bdevs_list": [ 00:10:52.183 { 00:10:52.183 "name": "BaseBdev1", 00:10:52.183 "uuid": "80106896-0f85-4b63-a79e-32f17dd675c0", 00:10:52.183 "is_configured": true, 00:10:52.183 "data_offset": 0, 00:10:52.183 "data_size": 65536 00:10:52.183 }, 00:10:52.183 { 00:10:52.183 "name": "BaseBdev2", 00:10:52.183 "uuid": "daadcb29-28dd-45a4-8d8a-b8af7e4f44d1", 00:10:52.183 "is_configured": true, 00:10:52.183 "data_offset": 0, 00:10:52.183 "data_size": 65536 00:10:52.183 }, 00:10:52.183 { 00:10:52.183 "name": "BaseBdev3", 00:10:52.183 "uuid": "a5396308-bbfa-4fbb-bb51-e940e1366110", 00:10:52.183 "is_configured": true, 00:10:52.183 "data_offset": 0, 00:10:52.183 "data_size": 65536 00:10:52.183 }, 00:10:52.183 { 00:10:52.183 "name": "BaseBdev4", 00:10:52.183 "uuid": "f0de216a-59de-4a2f-83a2-2762cb5d7b96", 00:10:52.183 "is_configured": true, 00:10:52.183 "data_offset": 0, 00:10:52.183 "data_size": 65536 00:10:52.183 } 00:10:52.183 ] 00:10:52.183 } 00:10:52.183 } 00:10:52.183 }' 00:10:52.183 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.183 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:52.183 BaseBdev2 00:10:52.183 BaseBdev3 
00:10:52.183 BaseBdev4' 00:10:52.183 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.183 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.443 16:36:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.443 16:36:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.443 [2024-12-07 16:36:51.205507] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.443 
16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.443 "name": "Existed_Raid", 00:10:52.443 "uuid": "1984f7be-788f-407e-adb9-dbfa1db2a3c8", 00:10:52.443 "strip_size_kb": 0, 00:10:52.443 "state": "online", 00:10:52.443 "raid_level": "raid1", 00:10:52.443 "superblock": false, 00:10:52.443 "num_base_bdevs": 4, 00:10:52.443 "num_base_bdevs_discovered": 3, 00:10:52.443 "num_base_bdevs_operational": 3, 00:10:52.443 "base_bdevs_list": [ 00:10:52.443 { 00:10:52.443 "name": null, 00:10:52.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.443 "is_configured": false, 00:10:52.443 "data_offset": 0, 00:10:52.443 "data_size": 65536 00:10:52.443 }, 00:10:52.443 { 00:10:52.443 "name": "BaseBdev2", 00:10:52.443 "uuid": "daadcb29-28dd-45a4-8d8a-b8af7e4f44d1", 00:10:52.443 "is_configured": true, 00:10:52.443 "data_offset": 0, 00:10:52.443 "data_size": 65536 00:10:52.443 }, 00:10:52.443 { 00:10:52.443 "name": "BaseBdev3", 00:10:52.443 "uuid": "a5396308-bbfa-4fbb-bb51-e940e1366110", 00:10:52.443 "is_configured": true, 00:10:52.443 "data_offset": 0, 
00:10:52.443 "data_size": 65536 00:10:52.443 }, 00:10:52.443 { 00:10:52.443 "name": "BaseBdev4", 00:10:52.443 "uuid": "f0de216a-59de-4a2f-83a2-2762cb5d7b96", 00:10:52.443 "is_configured": true, 00:10:52.443 "data_offset": 0, 00:10:52.443 "data_size": 65536 00:10:52.443 } 00:10:52.443 ] 00:10:52.443 }' 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.443 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.013 [2024-12-07 16:36:51.757512] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.013 [2024-12-07 16:36:51.842084] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:53.013 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.274 [2024-12-07 16:36:51.918617] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:53.274 [2024-12-07 16:36:51.918728] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.274 [2024-12-07 16:36:51.939890] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.274 [2024-12-07 16:36:51.940023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.274 [2024-12-07 16:36:51.940044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.274 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.274 BaseBdev2 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.274 [ 00:10:53.274 { 00:10:53.274 "name": "BaseBdev2", 00:10:53.274 "aliases": [ 00:10:53.274 "6ee36cbc-08b6-4531-99f6-4c425a130f5c" 00:10:53.274 ], 00:10:53.274 "product_name": "Malloc disk", 00:10:53.274 "block_size": 512, 00:10:53.274 "num_blocks": 65536, 00:10:53.274 "uuid": "6ee36cbc-08b6-4531-99f6-4c425a130f5c", 00:10:53.274 "assigned_rate_limits": { 00:10:53.274 "rw_ios_per_sec": 0, 00:10:53.274 "rw_mbytes_per_sec": 0, 00:10:53.274 "r_mbytes_per_sec": 0, 00:10:53.274 "w_mbytes_per_sec": 0 00:10:53.274 }, 00:10:53.274 "claimed": false, 00:10:53.274 "zoned": false, 00:10:53.274 "supported_io_types": { 00:10:53.274 "read": true, 00:10:53.274 "write": true, 00:10:53.274 "unmap": true, 00:10:53.274 "flush": true, 00:10:53.274 "reset": true, 00:10:53.274 "nvme_admin": false, 00:10:53.274 "nvme_io": false, 00:10:53.274 "nvme_io_md": false, 00:10:53.274 "write_zeroes": true, 00:10:53.274 "zcopy": true, 00:10:53.274 "get_zone_info": false, 00:10:53.274 "zone_management": false, 00:10:53.274 "zone_append": false, 
00:10:53.274 "compare": false, 00:10:53.274 "compare_and_write": false, 00:10:53.274 "abort": true, 00:10:53.274 "seek_hole": false, 00:10:53.274 "seek_data": false, 00:10:53.274 "copy": true, 00:10:53.274 "nvme_iov_md": false 00:10:53.274 }, 00:10:53.274 "memory_domains": [ 00:10:53.274 { 00:10:53.274 "dma_device_id": "system", 00:10:53.274 "dma_device_type": 1 00:10:53.274 }, 00:10:53.274 { 00:10:53.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.274 "dma_device_type": 2 00:10:53.274 } 00:10:53.274 ], 00:10:53.274 "driver_specific": {} 00:10:53.274 } 00:10:53.274 ] 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.274 BaseBdev3 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.274 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.274 [ 00:10:53.274 { 00:10:53.274 "name": "BaseBdev3", 00:10:53.274 "aliases": [ 00:10:53.274 "ebb79cb2-4f36-45b2-aca8-8c9bdaa6de54" 00:10:53.274 ], 00:10:53.274 "product_name": "Malloc disk", 00:10:53.274 "block_size": 512, 00:10:53.274 "num_blocks": 65536, 00:10:53.274 "uuid": "ebb79cb2-4f36-45b2-aca8-8c9bdaa6de54", 00:10:53.274 "assigned_rate_limits": { 00:10:53.274 "rw_ios_per_sec": 0, 00:10:53.274 "rw_mbytes_per_sec": 0, 00:10:53.274 "r_mbytes_per_sec": 0, 00:10:53.274 "w_mbytes_per_sec": 0 00:10:53.274 }, 00:10:53.274 "claimed": false, 00:10:53.274 "zoned": false, 00:10:53.274 "supported_io_types": { 00:10:53.274 "read": true, 00:10:53.274 "write": true, 00:10:53.274 "unmap": true, 00:10:53.274 "flush": true, 00:10:53.275 "reset": true, 00:10:53.275 "nvme_admin": false, 00:10:53.275 "nvme_io": false, 00:10:53.275 "nvme_io_md": false, 00:10:53.275 "write_zeroes": true, 00:10:53.275 "zcopy": true, 00:10:53.275 "get_zone_info": false, 00:10:53.275 "zone_management": false, 00:10:53.275 "zone_append": false, 
00:10:53.275 "compare": false, 00:10:53.275 "compare_and_write": false, 00:10:53.275 "abort": true, 00:10:53.275 "seek_hole": false, 00:10:53.275 "seek_data": false, 00:10:53.275 "copy": true, 00:10:53.275 "nvme_iov_md": false 00:10:53.275 }, 00:10:53.275 "memory_domains": [ 00:10:53.275 { 00:10:53.275 "dma_device_id": "system", 00:10:53.275 "dma_device_type": 1 00:10:53.275 }, 00:10:53.275 { 00:10:53.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.275 "dma_device_type": 2 00:10:53.275 } 00:10:53.275 ], 00:10:53.275 "driver_specific": {} 00:10:53.275 } 00:10:53.275 ] 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.275 BaseBdev4 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.275 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.275 [ 00:10:53.275 { 00:10:53.275 "name": "BaseBdev4", 00:10:53.275 "aliases": [ 00:10:53.275 "d04c6581-552a-4600-ad8f-ea10f49bb170" 00:10:53.275 ], 00:10:53.275 "product_name": "Malloc disk", 00:10:53.275 "block_size": 512, 00:10:53.275 "num_blocks": 65536, 00:10:53.275 "uuid": "d04c6581-552a-4600-ad8f-ea10f49bb170", 00:10:53.275 "assigned_rate_limits": { 00:10:53.275 "rw_ios_per_sec": 0, 00:10:53.275 "rw_mbytes_per_sec": 0, 00:10:53.275 "r_mbytes_per_sec": 0, 00:10:53.275 "w_mbytes_per_sec": 0 00:10:53.275 }, 00:10:53.275 "claimed": false, 00:10:53.275 "zoned": false, 00:10:53.275 "supported_io_types": { 00:10:53.275 "read": true, 00:10:53.275 "write": true, 00:10:53.275 "unmap": true, 00:10:53.275 "flush": true, 00:10:53.275 "reset": true, 00:10:53.275 "nvme_admin": false, 00:10:53.275 "nvme_io": false, 00:10:53.535 "nvme_io_md": false, 00:10:53.535 "write_zeroes": true, 00:10:53.535 "zcopy": true, 00:10:53.535 "get_zone_info": false, 00:10:53.535 "zone_management": false, 00:10:53.535 "zone_append": false, 
00:10:53.535 "compare": false, 00:10:53.535 "compare_and_write": false, 00:10:53.535 "abort": true, 00:10:53.535 "seek_hole": false, 00:10:53.535 "seek_data": false, 00:10:53.535 "copy": true, 00:10:53.535 "nvme_iov_md": false 00:10:53.535 }, 00:10:53.535 "memory_domains": [ 00:10:53.535 { 00:10:53.535 "dma_device_id": "system", 00:10:53.535 "dma_device_type": 1 00:10:53.535 }, 00:10:53.535 { 00:10:53.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.535 "dma_device_type": 2 00:10:53.535 } 00:10:53.535 ], 00:10:53.535 "driver_specific": {} 00:10:53.535 } 00:10:53.535 ] 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.535 [2024-12-07 16:36:52.183753] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.535 [2024-12-07 16:36:52.183878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.535 [2024-12-07 16:36:52.183920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.535 [2024-12-07 16:36:52.186097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.535 [2024-12-07 16:36:52.186183] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:53.535 "name": "Existed_Raid", 00:10:53.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.535 "strip_size_kb": 0, 00:10:53.535 "state": "configuring", 00:10:53.535 "raid_level": "raid1", 00:10:53.535 "superblock": false, 00:10:53.535 "num_base_bdevs": 4, 00:10:53.535 "num_base_bdevs_discovered": 3, 00:10:53.535 "num_base_bdevs_operational": 4, 00:10:53.535 "base_bdevs_list": [ 00:10:53.535 { 00:10:53.535 "name": "BaseBdev1", 00:10:53.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.535 "is_configured": false, 00:10:53.535 "data_offset": 0, 00:10:53.535 "data_size": 0 00:10:53.535 }, 00:10:53.535 { 00:10:53.535 "name": "BaseBdev2", 00:10:53.535 "uuid": "6ee36cbc-08b6-4531-99f6-4c425a130f5c", 00:10:53.535 "is_configured": true, 00:10:53.535 "data_offset": 0, 00:10:53.535 "data_size": 65536 00:10:53.535 }, 00:10:53.535 { 00:10:53.535 "name": "BaseBdev3", 00:10:53.535 "uuid": "ebb79cb2-4f36-45b2-aca8-8c9bdaa6de54", 00:10:53.535 "is_configured": true, 00:10:53.535 "data_offset": 0, 00:10:53.535 "data_size": 65536 00:10:53.535 }, 00:10:53.535 { 00:10:53.535 "name": "BaseBdev4", 00:10:53.535 "uuid": "d04c6581-552a-4600-ad8f-ea10f49bb170", 00:10:53.535 "is_configured": true, 00:10:53.535 "data_offset": 0, 00:10:53.535 "data_size": 65536 00:10:53.535 } 00:10:53.535 ] 00:10:53.535 }' 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.535 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.795 [2024-12-07 16:36:52.603133] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.795 "name": "Existed_Raid", 00:10:53.795 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:53.795 "strip_size_kb": 0, 00:10:53.795 "state": "configuring", 00:10:53.795 "raid_level": "raid1", 00:10:53.795 "superblock": false, 00:10:53.795 "num_base_bdevs": 4, 00:10:53.795 "num_base_bdevs_discovered": 2, 00:10:53.795 "num_base_bdevs_operational": 4, 00:10:53.795 "base_bdevs_list": [ 00:10:53.795 { 00:10:53.795 "name": "BaseBdev1", 00:10:53.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.795 "is_configured": false, 00:10:53.795 "data_offset": 0, 00:10:53.795 "data_size": 0 00:10:53.795 }, 00:10:53.795 { 00:10:53.795 "name": null, 00:10:53.795 "uuid": "6ee36cbc-08b6-4531-99f6-4c425a130f5c", 00:10:53.795 "is_configured": false, 00:10:53.795 "data_offset": 0, 00:10:53.795 "data_size": 65536 00:10:53.795 }, 00:10:53.795 { 00:10:53.795 "name": "BaseBdev3", 00:10:53.795 "uuid": "ebb79cb2-4f36-45b2-aca8-8c9bdaa6de54", 00:10:53.795 "is_configured": true, 00:10:53.795 "data_offset": 0, 00:10:53.795 "data_size": 65536 00:10:53.795 }, 00:10:53.795 { 00:10:53.795 "name": "BaseBdev4", 00:10:53.795 "uuid": "d04c6581-552a-4600-ad8f-ea10f49bb170", 00:10:53.795 "is_configured": true, 00:10:53.795 "data_offset": 0, 00:10:53.795 "data_size": 65536 00:10:53.795 } 00:10:53.795 ] 00:10:53.795 }' 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.795 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.366 [2024-12-07 16:36:53.075180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.366 BaseBdev1 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.366 [ 00:10:54.366 { 00:10:54.366 "name": "BaseBdev1", 00:10:54.366 "aliases": [ 00:10:54.366 "5480ca10-84bb-4c05-b40b-06735d1760b9" 00:10:54.366 ], 00:10:54.366 "product_name": "Malloc disk", 00:10:54.366 "block_size": 512, 00:10:54.366 "num_blocks": 65536, 00:10:54.366 "uuid": "5480ca10-84bb-4c05-b40b-06735d1760b9", 00:10:54.366 "assigned_rate_limits": { 00:10:54.366 "rw_ios_per_sec": 0, 00:10:54.366 "rw_mbytes_per_sec": 0, 00:10:54.366 "r_mbytes_per_sec": 0, 00:10:54.366 "w_mbytes_per_sec": 0 00:10:54.366 }, 00:10:54.366 "claimed": true, 00:10:54.366 "claim_type": "exclusive_write", 00:10:54.366 "zoned": false, 00:10:54.366 "supported_io_types": { 00:10:54.366 "read": true, 00:10:54.366 "write": true, 00:10:54.366 "unmap": true, 00:10:54.366 "flush": true, 00:10:54.366 "reset": true, 00:10:54.366 "nvme_admin": false, 00:10:54.366 "nvme_io": false, 00:10:54.366 "nvme_io_md": false, 00:10:54.366 "write_zeroes": true, 00:10:54.366 "zcopy": true, 00:10:54.366 "get_zone_info": false, 00:10:54.366 "zone_management": false, 00:10:54.366 "zone_append": false, 00:10:54.366 "compare": false, 00:10:54.366 "compare_and_write": false, 00:10:54.366 "abort": true, 00:10:54.366 "seek_hole": false, 00:10:54.366 "seek_data": false, 00:10:54.366 "copy": true, 00:10:54.366 "nvme_iov_md": false 00:10:54.366 }, 00:10:54.366 "memory_domains": [ 00:10:54.366 { 00:10:54.366 "dma_device_id": "system", 00:10:54.366 "dma_device_type": 1 00:10:54.366 }, 00:10:54.366 { 00:10:54.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.366 "dma_device_type": 2 00:10:54.366 } 00:10:54.366 ], 00:10:54.366 "driver_specific": {} 00:10:54.366 } 00:10:54.366 ] 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.366 "name": "Existed_Raid", 00:10:54.366 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:54.366 "strip_size_kb": 0, 00:10:54.366 "state": "configuring", 00:10:54.366 "raid_level": "raid1", 00:10:54.366 "superblock": false, 00:10:54.366 "num_base_bdevs": 4, 00:10:54.366 "num_base_bdevs_discovered": 3, 00:10:54.366 "num_base_bdevs_operational": 4, 00:10:54.366 "base_bdevs_list": [ 00:10:54.366 { 00:10:54.366 "name": "BaseBdev1", 00:10:54.366 "uuid": "5480ca10-84bb-4c05-b40b-06735d1760b9", 00:10:54.366 "is_configured": true, 00:10:54.366 "data_offset": 0, 00:10:54.366 "data_size": 65536 00:10:54.366 }, 00:10:54.366 { 00:10:54.366 "name": null, 00:10:54.366 "uuid": "6ee36cbc-08b6-4531-99f6-4c425a130f5c", 00:10:54.366 "is_configured": false, 00:10:54.366 "data_offset": 0, 00:10:54.366 "data_size": 65536 00:10:54.366 }, 00:10:54.366 { 00:10:54.366 "name": "BaseBdev3", 00:10:54.366 "uuid": "ebb79cb2-4f36-45b2-aca8-8c9bdaa6de54", 00:10:54.366 "is_configured": true, 00:10:54.366 "data_offset": 0, 00:10:54.366 "data_size": 65536 00:10:54.366 }, 00:10:54.366 { 00:10:54.366 "name": "BaseBdev4", 00:10:54.366 "uuid": "d04c6581-552a-4600-ad8f-ea10f49bb170", 00:10:54.366 "is_configured": true, 00:10:54.366 "data_offset": 0, 00:10:54.366 "data_size": 65536 00:10:54.366 } 00:10:54.366 ] 00:10:54.366 }' 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.366 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.936 [2024-12-07 16:36:53.622299] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.936 "name": "Existed_Raid", 00:10:54.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.936 "strip_size_kb": 0, 00:10:54.936 "state": "configuring", 00:10:54.936 "raid_level": "raid1", 00:10:54.936 "superblock": false, 00:10:54.936 "num_base_bdevs": 4, 00:10:54.936 "num_base_bdevs_discovered": 2, 00:10:54.936 "num_base_bdevs_operational": 4, 00:10:54.936 "base_bdevs_list": [ 00:10:54.936 { 00:10:54.936 "name": "BaseBdev1", 00:10:54.936 "uuid": "5480ca10-84bb-4c05-b40b-06735d1760b9", 00:10:54.936 "is_configured": true, 00:10:54.936 "data_offset": 0, 00:10:54.936 "data_size": 65536 00:10:54.936 }, 00:10:54.936 { 00:10:54.936 "name": null, 00:10:54.936 "uuid": "6ee36cbc-08b6-4531-99f6-4c425a130f5c", 00:10:54.936 "is_configured": false, 00:10:54.936 "data_offset": 0, 00:10:54.936 "data_size": 65536 00:10:54.936 }, 00:10:54.936 { 00:10:54.936 "name": null, 00:10:54.936 "uuid": "ebb79cb2-4f36-45b2-aca8-8c9bdaa6de54", 00:10:54.936 "is_configured": false, 00:10:54.936 "data_offset": 0, 00:10:54.936 "data_size": 65536 00:10:54.936 }, 00:10:54.936 { 00:10:54.936 "name": "BaseBdev4", 00:10:54.936 "uuid": "d04c6581-552a-4600-ad8f-ea10f49bb170", 00:10:54.936 "is_configured": true, 00:10:54.936 "data_offset": 0, 00:10:54.936 "data_size": 65536 00:10:54.936 } 00:10:54.936 ] 00:10:54.936 }' 00:10:54.936 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.936 16:36:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.505 [2024-12-07 16:36:54.145482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.505 16:36:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.505 "name": "Existed_Raid", 00:10:55.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.505 "strip_size_kb": 0, 00:10:55.505 "state": "configuring", 00:10:55.505 "raid_level": "raid1", 00:10:55.505 "superblock": false, 00:10:55.505 "num_base_bdevs": 4, 00:10:55.505 "num_base_bdevs_discovered": 3, 00:10:55.505 "num_base_bdevs_operational": 4, 00:10:55.505 "base_bdevs_list": [ 00:10:55.505 { 00:10:55.505 "name": "BaseBdev1", 00:10:55.505 "uuid": "5480ca10-84bb-4c05-b40b-06735d1760b9", 00:10:55.505 "is_configured": true, 00:10:55.505 "data_offset": 0, 00:10:55.505 "data_size": 65536 00:10:55.505 }, 00:10:55.505 { 00:10:55.505 "name": null, 00:10:55.505 "uuid": "6ee36cbc-08b6-4531-99f6-4c425a130f5c", 00:10:55.505 "is_configured": false, 00:10:55.505 "data_offset": 
0, 00:10:55.505 "data_size": 65536 00:10:55.505 }, 00:10:55.505 { 00:10:55.505 "name": "BaseBdev3", 00:10:55.505 "uuid": "ebb79cb2-4f36-45b2-aca8-8c9bdaa6de54", 00:10:55.505 "is_configured": true, 00:10:55.505 "data_offset": 0, 00:10:55.505 "data_size": 65536 00:10:55.505 }, 00:10:55.505 { 00:10:55.505 "name": "BaseBdev4", 00:10:55.505 "uuid": "d04c6581-552a-4600-ad8f-ea10f49bb170", 00:10:55.505 "is_configured": true, 00:10:55.505 "data_offset": 0, 00:10:55.505 "data_size": 65536 00:10:55.505 } 00:10:55.505 ] 00:10:55.505 }' 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.505 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.765 [2024-12-07 16:36:54.612654] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.765 16:36:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.765 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.031 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.031 "name": "Existed_Raid", 00:10:56.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.031 "strip_size_kb": 0, 00:10:56.031 "state": "configuring", 00:10:56.031 
"raid_level": "raid1", 00:10:56.031 "superblock": false, 00:10:56.031 "num_base_bdevs": 4, 00:10:56.031 "num_base_bdevs_discovered": 2, 00:10:56.031 "num_base_bdevs_operational": 4, 00:10:56.031 "base_bdevs_list": [ 00:10:56.031 { 00:10:56.031 "name": null, 00:10:56.031 "uuid": "5480ca10-84bb-4c05-b40b-06735d1760b9", 00:10:56.031 "is_configured": false, 00:10:56.031 "data_offset": 0, 00:10:56.031 "data_size": 65536 00:10:56.031 }, 00:10:56.031 { 00:10:56.031 "name": null, 00:10:56.031 "uuid": "6ee36cbc-08b6-4531-99f6-4c425a130f5c", 00:10:56.031 "is_configured": false, 00:10:56.031 "data_offset": 0, 00:10:56.031 "data_size": 65536 00:10:56.031 }, 00:10:56.031 { 00:10:56.031 "name": "BaseBdev3", 00:10:56.031 "uuid": "ebb79cb2-4f36-45b2-aca8-8c9bdaa6de54", 00:10:56.031 "is_configured": true, 00:10:56.031 "data_offset": 0, 00:10:56.031 "data_size": 65536 00:10:56.031 }, 00:10:56.031 { 00:10:56.031 "name": "BaseBdev4", 00:10:56.031 "uuid": "d04c6581-552a-4600-ad8f-ea10f49bb170", 00:10:56.031 "is_configured": true, 00:10:56.031 "data_offset": 0, 00:10:56.031 "data_size": 65536 00:10:56.031 } 00:10:56.031 ] 00:10:56.031 }' 00:10:56.031 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.031 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.301 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.301 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.301 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.301 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.301 [2024-12-07 16:36:55.043621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.301 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.301 "name": "Existed_Raid", 00:10:56.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.301 "strip_size_kb": 0, 00:10:56.301 "state": "configuring", 00:10:56.301 "raid_level": "raid1", 00:10:56.301 "superblock": false, 00:10:56.301 "num_base_bdevs": 4, 00:10:56.301 "num_base_bdevs_discovered": 3, 00:10:56.301 "num_base_bdevs_operational": 4, 00:10:56.301 "base_bdevs_list": [ 00:10:56.301 { 00:10:56.301 "name": null, 00:10:56.302 "uuid": "5480ca10-84bb-4c05-b40b-06735d1760b9", 00:10:56.302 "is_configured": false, 00:10:56.302 "data_offset": 0, 00:10:56.302 "data_size": 65536 00:10:56.302 }, 00:10:56.302 { 00:10:56.302 "name": "BaseBdev2", 00:10:56.302 "uuid": "6ee36cbc-08b6-4531-99f6-4c425a130f5c", 00:10:56.302 "is_configured": true, 00:10:56.302 "data_offset": 0, 00:10:56.302 "data_size": 65536 00:10:56.302 }, 00:10:56.302 { 00:10:56.302 "name": "BaseBdev3", 00:10:56.302 "uuid": "ebb79cb2-4f36-45b2-aca8-8c9bdaa6de54", 00:10:56.302 "is_configured": true, 00:10:56.302 "data_offset": 0, 00:10:56.302 "data_size": 65536 00:10:56.302 }, 00:10:56.302 { 00:10:56.302 "name": "BaseBdev4", 00:10:56.302 "uuid": "d04c6581-552a-4600-ad8f-ea10f49bb170", 00:10:56.302 "is_configured": true, 00:10:56.302 "data_offset": 0, 00:10:56.302 "data_size": 65536 00:10:56.302 } 00:10:56.302 ] 00:10:56.302 }' 00:10:56.302 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.302 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.871 16:36:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:56.871 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.871 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.871 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.871 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.871 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:56.871 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.871 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.871 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:56.871 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5480ca10-84bb-4c05-b40b-06735d1760b9 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.872 [2024-12-07 16:36:55.580083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:56.872 [2024-12-07 16:36:55.580161] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:56.872 [2024-12-07 16:36:55.580173] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:56.872 
[2024-12-07 16:36:55.580475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:56.872 [2024-12-07 16:36:55.580640] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:56.872 [2024-12-07 16:36:55.580649] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:56.872 [2024-12-07 16:36:55.580869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.872 NewBaseBdev 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.872 [ 00:10:56.872 { 00:10:56.872 "name": "NewBaseBdev", 00:10:56.872 "aliases": [ 00:10:56.872 "5480ca10-84bb-4c05-b40b-06735d1760b9" 00:10:56.872 ], 00:10:56.872 "product_name": "Malloc disk", 00:10:56.872 "block_size": 512, 00:10:56.872 "num_blocks": 65536, 00:10:56.872 "uuid": "5480ca10-84bb-4c05-b40b-06735d1760b9", 00:10:56.872 "assigned_rate_limits": { 00:10:56.872 "rw_ios_per_sec": 0, 00:10:56.872 "rw_mbytes_per_sec": 0, 00:10:56.872 "r_mbytes_per_sec": 0, 00:10:56.872 "w_mbytes_per_sec": 0 00:10:56.872 }, 00:10:56.872 "claimed": true, 00:10:56.872 "claim_type": "exclusive_write", 00:10:56.872 "zoned": false, 00:10:56.872 "supported_io_types": { 00:10:56.872 "read": true, 00:10:56.872 "write": true, 00:10:56.872 "unmap": true, 00:10:56.872 "flush": true, 00:10:56.872 "reset": true, 00:10:56.872 "nvme_admin": false, 00:10:56.872 "nvme_io": false, 00:10:56.872 "nvme_io_md": false, 00:10:56.872 "write_zeroes": true, 00:10:56.872 "zcopy": true, 00:10:56.872 "get_zone_info": false, 00:10:56.872 "zone_management": false, 00:10:56.872 "zone_append": false, 00:10:56.872 "compare": false, 00:10:56.872 "compare_and_write": false, 00:10:56.872 "abort": true, 00:10:56.872 "seek_hole": false, 00:10:56.872 "seek_data": false, 00:10:56.872 "copy": true, 00:10:56.872 "nvme_iov_md": false 00:10:56.872 }, 00:10:56.872 "memory_domains": [ 00:10:56.872 { 00:10:56.872 "dma_device_id": "system", 00:10:56.872 "dma_device_type": 1 00:10:56.872 }, 00:10:56.872 { 00:10:56.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.872 "dma_device_type": 2 00:10:56.872 } 00:10:56.872 ], 00:10:56.872 "driver_specific": {} 00:10:56.872 } 00:10:56.872 ] 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.872 "name": "Existed_Raid", 00:10:56.872 "uuid": "2034d8a1-3741-41c5-a239-c1d282d2c814", 00:10:56.872 "strip_size_kb": 0, 00:10:56.872 "state": "online", 00:10:56.872 
"raid_level": "raid1", 00:10:56.872 "superblock": false, 00:10:56.872 "num_base_bdevs": 4, 00:10:56.872 "num_base_bdevs_discovered": 4, 00:10:56.872 "num_base_bdevs_operational": 4, 00:10:56.872 "base_bdevs_list": [ 00:10:56.872 { 00:10:56.872 "name": "NewBaseBdev", 00:10:56.872 "uuid": "5480ca10-84bb-4c05-b40b-06735d1760b9", 00:10:56.872 "is_configured": true, 00:10:56.872 "data_offset": 0, 00:10:56.872 "data_size": 65536 00:10:56.872 }, 00:10:56.872 { 00:10:56.872 "name": "BaseBdev2", 00:10:56.872 "uuid": "6ee36cbc-08b6-4531-99f6-4c425a130f5c", 00:10:56.872 "is_configured": true, 00:10:56.872 "data_offset": 0, 00:10:56.872 "data_size": 65536 00:10:56.872 }, 00:10:56.872 { 00:10:56.872 "name": "BaseBdev3", 00:10:56.872 "uuid": "ebb79cb2-4f36-45b2-aca8-8c9bdaa6de54", 00:10:56.872 "is_configured": true, 00:10:56.872 "data_offset": 0, 00:10:56.872 "data_size": 65536 00:10:56.872 }, 00:10:56.872 { 00:10:56.872 "name": "BaseBdev4", 00:10:56.872 "uuid": "d04c6581-552a-4600-ad8f-ea10f49bb170", 00:10:56.872 "is_configured": true, 00:10:56.872 "data_offset": 0, 00:10:56.872 "data_size": 65536 00:10:56.872 } 00:10:56.872 ] 00:10:56.872 }' 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.872 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.442 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:57.442 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:57.442 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.442 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.442 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.442 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:10:57.442 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:57.442 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.442 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.442 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.442 [2024-12-07 16:36:56.103606] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.442 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.442 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.442 "name": "Existed_Raid", 00:10:57.442 "aliases": [ 00:10:57.442 "2034d8a1-3741-41c5-a239-c1d282d2c814" 00:10:57.442 ], 00:10:57.442 "product_name": "Raid Volume", 00:10:57.442 "block_size": 512, 00:10:57.442 "num_blocks": 65536, 00:10:57.442 "uuid": "2034d8a1-3741-41c5-a239-c1d282d2c814", 00:10:57.442 "assigned_rate_limits": { 00:10:57.442 "rw_ios_per_sec": 0, 00:10:57.442 "rw_mbytes_per_sec": 0, 00:10:57.442 "r_mbytes_per_sec": 0, 00:10:57.442 "w_mbytes_per_sec": 0 00:10:57.442 }, 00:10:57.442 "claimed": false, 00:10:57.442 "zoned": false, 00:10:57.442 "supported_io_types": { 00:10:57.442 "read": true, 00:10:57.442 "write": true, 00:10:57.442 "unmap": false, 00:10:57.442 "flush": false, 00:10:57.442 "reset": true, 00:10:57.442 "nvme_admin": false, 00:10:57.442 "nvme_io": false, 00:10:57.442 "nvme_io_md": false, 00:10:57.442 "write_zeroes": true, 00:10:57.443 "zcopy": false, 00:10:57.443 "get_zone_info": false, 00:10:57.443 "zone_management": false, 00:10:57.443 "zone_append": false, 00:10:57.443 "compare": false, 00:10:57.443 "compare_and_write": false, 00:10:57.443 "abort": false, 00:10:57.443 "seek_hole": false, 00:10:57.443 "seek_data": false, 00:10:57.443 
"copy": false, 00:10:57.443 "nvme_iov_md": false 00:10:57.443 }, 00:10:57.443 "memory_domains": [ 00:10:57.443 { 00:10:57.443 "dma_device_id": "system", 00:10:57.443 "dma_device_type": 1 00:10:57.443 }, 00:10:57.443 { 00:10:57.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.443 "dma_device_type": 2 00:10:57.443 }, 00:10:57.443 { 00:10:57.443 "dma_device_id": "system", 00:10:57.443 "dma_device_type": 1 00:10:57.443 }, 00:10:57.443 { 00:10:57.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.443 "dma_device_type": 2 00:10:57.443 }, 00:10:57.443 { 00:10:57.443 "dma_device_id": "system", 00:10:57.443 "dma_device_type": 1 00:10:57.443 }, 00:10:57.443 { 00:10:57.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.443 "dma_device_type": 2 00:10:57.443 }, 00:10:57.443 { 00:10:57.443 "dma_device_id": "system", 00:10:57.443 "dma_device_type": 1 00:10:57.443 }, 00:10:57.443 { 00:10:57.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.443 "dma_device_type": 2 00:10:57.443 } 00:10:57.443 ], 00:10:57.443 "driver_specific": { 00:10:57.443 "raid": { 00:10:57.443 "uuid": "2034d8a1-3741-41c5-a239-c1d282d2c814", 00:10:57.443 "strip_size_kb": 0, 00:10:57.443 "state": "online", 00:10:57.443 "raid_level": "raid1", 00:10:57.443 "superblock": false, 00:10:57.443 "num_base_bdevs": 4, 00:10:57.443 "num_base_bdevs_discovered": 4, 00:10:57.443 "num_base_bdevs_operational": 4, 00:10:57.443 "base_bdevs_list": [ 00:10:57.443 { 00:10:57.443 "name": "NewBaseBdev", 00:10:57.443 "uuid": "5480ca10-84bb-4c05-b40b-06735d1760b9", 00:10:57.443 "is_configured": true, 00:10:57.443 "data_offset": 0, 00:10:57.443 "data_size": 65536 00:10:57.443 }, 00:10:57.443 { 00:10:57.443 "name": "BaseBdev2", 00:10:57.443 "uuid": "6ee36cbc-08b6-4531-99f6-4c425a130f5c", 00:10:57.443 "is_configured": true, 00:10:57.443 "data_offset": 0, 00:10:57.443 "data_size": 65536 00:10:57.443 }, 00:10:57.443 { 00:10:57.443 "name": "BaseBdev3", 00:10:57.443 "uuid": "ebb79cb2-4f36-45b2-aca8-8c9bdaa6de54", 00:10:57.443 
"is_configured": true, 00:10:57.443 "data_offset": 0, 00:10:57.443 "data_size": 65536 00:10:57.443 }, 00:10:57.443 { 00:10:57.443 "name": "BaseBdev4", 00:10:57.443 "uuid": "d04c6581-552a-4600-ad8f-ea10f49bb170", 00:10:57.443 "is_configured": true, 00:10:57.443 "data_offset": 0, 00:10:57.443 "data_size": 65536 00:10:57.443 } 00:10:57.443 ] 00:10:57.443 } 00:10:57.443 } 00:10:57.443 }' 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:57.443 BaseBdev2 00:10:57.443 BaseBdev3 00:10:57.443 BaseBdev4' 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.443 16:36:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.443 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.704 16:36:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.704 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.704 [2024-12-07 16:36:56.446716] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.704 [2024-12-07 16:36:56.446810] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.704 [2024-12-07 16:36:56.446927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.704 [2024-12-07 16:36:56.447224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.705 [2024-12-07 16:36:56.447246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:57.705 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.705 16:36:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 84295 00:10:57.705 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 84295 ']' 00:10:57.705 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 84295 00:10:57.705 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:57.705 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:57.705 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84295 00:10:57.705 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:57.705 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:57.705 killing process with pid 84295 00:10:57.705 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84295' 00:10:57.705 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 84295 00:10:57.705 [2024-12-07 16:36:56.487134] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.705 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 84295 00:10:57.705 [2024-12-07 16:36:56.564705] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.274 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:58.274 ************************************ 00:10:58.274 END TEST raid_state_function_test 00:10:58.274 ************************************ 00:10:58.274 00:10:58.274 real 0m9.958s 00:10:58.274 user 0m16.605s 00:10:58.274 sys 0m2.239s 00:10:58.274 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.274 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:58.274 16:36:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:58.274 16:36:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:58.274 16:36:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.274 16:36:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.274 ************************************ 00:10:58.274 START TEST raid_state_function_test_sb 00:10:58.274 ************************************ 00:10:58.274 16:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:10:58.274 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:58.274 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:58.274 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:58.274 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:58.274 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.275 
16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84950 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84950' 00:10:58.275 Process raid pid: 84950 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84950 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84950 ']' 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.275 16:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.275 [2024-12-07 16:36:57.126696] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:58.275 [2024-12-07 16:36:57.126951] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.534 [2024-12-07 16:36:57.293928] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.534 [2024-12-07 16:36:57.371526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.793 [2024-12-07 16:36:57.452205] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.793 [2024-12-07 16:36:57.452370] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.360 [2024-12-07 16:36:57.990042] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:59.360 [2024-12-07 16:36:57.990116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:59.360 [2024-12-07 16:36:57.990130] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.360 [2024-12-07 16:36:57.990141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.360 [2024-12-07 16:36:57.990150] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:59.360 [2024-12-07 16:36:57.990164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:59.360 [2024-12-07 16:36:57.990171] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:59.360 [2024-12-07 16:36:57.990180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.360 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.360 16:36:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.360 16:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.360 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.360 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.360 "name": "Existed_Raid", 00:10:59.360 "uuid": "8997018c-ad97-4335-b4dd-1a192f6260c1", 00:10:59.360 "strip_size_kb": 0, 00:10:59.360 "state": "configuring", 00:10:59.360 "raid_level": "raid1", 00:10:59.360 "superblock": true, 00:10:59.360 "num_base_bdevs": 4, 00:10:59.360 "num_base_bdevs_discovered": 0, 00:10:59.360 "num_base_bdevs_operational": 4, 00:10:59.360 "base_bdevs_list": [ 00:10:59.360 { 00:10:59.360 "name": "BaseBdev1", 00:10:59.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.360 "is_configured": false, 00:10:59.360 "data_offset": 0, 00:10:59.360 "data_size": 0 00:10:59.360 }, 00:10:59.360 { 00:10:59.360 "name": "BaseBdev2", 00:10:59.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.360 "is_configured": false, 00:10:59.360 "data_offset": 0, 00:10:59.360 "data_size": 0 00:10:59.360 }, 00:10:59.360 { 00:10:59.360 "name": "BaseBdev3", 00:10:59.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.360 "is_configured": false, 00:10:59.360 "data_offset": 0, 00:10:59.360 "data_size": 0 00:10:59.360 }, 00:10:59.360 { 00:10:59.360 "name": "BaseBdev4", 00:10:59.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.360 "is_configured": false, 00:10:59.360 "data_offset": 0, 00:10:59.360 "data_size": 0 00:10:59.360 } 00:10:59.360 ] 00:10:59.360 }' 00:10:59.360 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.360 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.620 16:36:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.620 [2024-12-07 16:36:58.461148] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.620 [2024-12-07 16:36:58.461276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.620 [2024-12-07 16:36:58.473164] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:59.620 [2024-12-07 16:36:58.473255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:59.620 [2024-12-07 16:36:58.473284] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.620 [2024-12-07 16:36:58.473309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.620 [2024-12-07 16:36:58.473328] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:59.620 [2024-12-07 16:36:58.473365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:59.620 [2024-12-07 16:36:58.473386] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:10:59.620 [2024-12-07 16:36:58.473409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.620 [2024-12-07 16:36:58.501109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.620 BaseBdev1 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.620 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.879 [ 00:10:59.879 { 00:10:59.879 "name": "BaseBdev1", 00:10:59.879 "aliases": [ 00:10:59.879 "8afa9f0b-3300-4a04-8143-3d1cb2497e0d" 00:10:59.879 ], 00:10:59.879 "product_name": "Malloc disk", 00:10:59.879 "block_size": 512, 00:10:59.879 "num_blocks": 65536, 00:10:59.879 "uuid": "8afa9f0b-3300-4a04-8143-3d1cb2497e0d", 00:10:59.879 "assigned_rate_limits": { 00:10:59.879 "rw_ios_per_sec": 0, 00:10:59.879 "rw_mbytes_per_sec": 0, 00:10:59.879 "r_mbytes_per_sec": 0, 00:10:59.879 "w_mbytes_per_sec": 0 00:10:59.879 }, 00:10:59.879 "claimed": true, 00:10:59.879 "claim_type": "exclusive_write", 00:10:59.879 "zoned": false, 00:10:59.879 "supported_io_types": { 00:10:59.879 "read": true, 00:10:59.879 "write": true, 00:10:59.879 "unmap": true, 00:10:59.879 "flush": true, 00:10:59.879 "reset": true, 00:10:59.879 "nvme_admin": false, 00:10:59.879 "nvme_io": false, 00:10:59.879 "nvme_io_md": false, 00:10:59.879 "write_zeroes": true, 00:10:59.879 "zcopy": true, 00:10:59.879 "get_zone_info": false, 00:10:59.879 "zone_management": false, 00:10:59.879 "zone_append": false, 00:10:59.879 "compare": false, 00:10:59.879 "compare_and_write": false, 00:10:59.879 "abort": true, 00:10:59.879 "seek_hole": false, 00:10:59.879 "seek_data": false, 00:10:59.879 "copy": true, 00:10:59.879 "nvme_iov_md": false 00:10:59.879 }, 00:10:59.879 "memory_domains": [ 00:10:59.879 { 00:10:59.879 "dma_device_id": "system", 00:10:59.879 "dma_device_type": 1 00:10:59.879 }, 00:10:59.879 { 00:10:59.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.879 "dma_device_type": 2 00:10:59.879 } 00:10:59.879 
], 00:10:59.879 "driver_specific": {} 00:10:59.879 } 00:10:59.879 ] 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.879 16:36:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.879 "name": "Existed_Raid", 00:10:59.879 "uuid": "78af4630-84e7-4604-a5f4-4e7a8c9693a1", 00:10:59.879 "strip_size_kb": 0, 00:10:59.879 "state": "configuring", 00:10:59.879 "raid_level": "raid1", 00:10:59.879 "superblock": true, 00:10:59.879 "num_base_bdevs": 4, 00:10:59.879 "num_base_bdevs_discovered": 1, 00:10:59.879 "num_base_bdevs_operational": 4, 00:10:59.879 "base_bdevs_list": [ 00:10:59.879 { 00:10:59.879 "name": "BaseBdev1", 00:10:59.879 "uuid": "8afa9f0b-3300-4a04-8143-3d1cb2497e0d", 00:10:59.879 "is_configured": true, 00:10:59.879 "data_offset": 2048, 00:10:59.879 "data_size": 63488 00:10:59.879 }, 00:10:59.879 { 00:10:59.879 "name": "BaseBdev2", 00:10:59.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.879 "is_configured": false, 00:10:59.879 "data_offset": 0, 00:10:59.879 "data_size": 0 00:10:59.879 }, 00:10:59.879 { 00:10:59.879 "name": "BaseBdev3", 00:10:59.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.879 "is_configured": false, 00:10:59.879 "data_offset": 0, 00:10:59.879 "data_size": 0 00:10:59.879 }, 00:10:59.879 { 00:10:59.879 "name": "BaseBdev4", 00:10:59.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.879 "is_configured": false, 00:10:59.879 "data_offset": 0, 00:10:59.879 "data_size": 0 00:10:59.879 } 00:10:59.879 ] 00:10:59.879 }' 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.879 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.138 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:00.138 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.138 16:36:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.138 [2024-12-07 16:36:58.996359] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:00.138 [2024-12-07 16:36:58.996446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:00.138 16:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.138 16:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.138 [2024-12-07 16:36:59.008387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.138 [2024-12-07 16:36:59.010723] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.138 [2024-12-07 16:36:59.010772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:00.138 [2024-12-07 16:36:59.010782] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:00.138 [2024-12-07 16:36:59.010792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:00.138 [2024-12-07 16:36:59.010799] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:00.138 [2024-12-07 16:36:59.010808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.138 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.397 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:00.397 "name": "Existed_Raid", 00:11:00.397 "uuid": "7537900b-648a-496c-bd65-d43f3a2807f4", 00:11:00.397 "strip_size_kb": 0, 00:11:00.397 "state": "configuring", 00:11:00.397 "raid_level": "raid1", 00:11:00.397 "superblock": true, 00:11:00.397 "num_base_bdevs": 4, 00:11:00.397 "num_base_bdevs_discovered": 1, 00:11:00.397 "num_base_bdevs_operational": 4, 00:11:00.397 "base_bdevs_list": [ 00:11:00.397 { 00:11:00.397 "name": "BaseBdev1", 00:11:00.397 "uuid": "8afa9f0b-3300-4a04-8143-3d1cb2497e0d", 00:11:00.397 "is_configured": true, 00:11:00.397 "data_offset": 2048, 00:11:00.397 "data_size": 63488 00:11:00.397 }, 00:11:00.397 { 00:11:00.397 "name": "BaseBdev2", 00:11:00.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.397 "is_configured": false, 00:11:00.397 "data_offset": 0, 00:11:00.397 "data_size": 0 00:11:00.397 }, 00:11:00.397 { 00:11:00.397 "name": "BaseBdev3", 00:11:00.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.397 "is_configured": false, 00:11:00.397 "data_offset": 0, 00:11:00.397 "data_size": 0 00:11:00.397 }, 00:11:00.397 { 00:11:00.397 "name": "BaseBdev4", 00:11:00.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.397 "is_configured": false, 00:11:00.397 "data_offset": 0, 00:11:00.397 "data_size": 0 00:11:00.397 } 00:11:00.397 ] 00:11:00.397 }' 00:11:00.397 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.397 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.657 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.658 [2024-12-07 16:36:59.454892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:11:00.658 BaseBdev2 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.658 [ 00:11:00.658 { 00:11:00.658 "name": "BaseBdev2", 00:11:00.658 "aliases": [ 00:11:00.658 "b250624f-845c-4404-bfd8-8d8f0886d9e6" 00:11:00.658 ], 00:11:00.658 "product_name": "Malloc disk", 00:11:00.658 "block_size": 512, 00:11:00.658 "num_blocks": 65536, 00:11:00.658 "uuid": "b250624f-845c-4404-bfd8-8d8f0886d9e6", 00:11:00.658 
"assigned_rate_limits": { 00:11:00.658 "rw_ios_per_sec": 0, 00:11:00.658 "rw_mbytes_per_sec": 0, 00:11:00.658 "r_mbytes_per_sec": 0, 00:11:00.658 "w_mbytes_per_sec": 0 00:11:00.658 }, 00:11:00.658 "claimed": true, 00:11:00.658 "claim_type": "exclusive_write", 00:11:00.658 "zoned": false, 00:11:00.658 "supported_io_types": { 00:11:00.658 "read": true, 00:11:00.658 "write": true, 00:11:00.658 "unmap": true, 00:11:00.658 "flush": true, 00:11:00.658 "reset": true, 00:11:00.658 "nvme_admin": false, 00:11:00.658 "nvme_io": false, 00:11:00.658 "nvme_io_md": false, 00:11:00.658 "write_zeroes": true, 00:11:00.658 "zcopy": true, 00:11:00.658 "get_zone_info": false, 00:11:00.658 "zone_management": false, 00:11:00.658 "zone_append": false, 00:11:00.658 "compare": false, 00:11:00.658 "compare_and_write": false, 00:11:00.658 "abort": true, 00:11:00.658 "seek_hole": false, 00:11:00.658 "seek_data": false, 00:11:00.658 "copy": true, 00:11:00.658 "nvme_iov_md": false 00:11:00.658 }, 00:11:00.658 "memory_domains": [ 00:11:00.658 { 00:11:00.658 "dma_device_id": "system", 00:11:00.658 "dma_device_type": 1 00:11:00.658 }, 00:11:00.658 { 00:11:00.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.658 "dma_device_type": 2 00:11:00.658 } 00:11:00.658 ], 00:11:00.658 "driver_specific": {} 00:11:00.658 } 00:11:00.658 ] 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.658 "name": "Existed_Raid", 00:11:00.658 "uuid": "7537900b-648a-496c-bd65-d43f3a2807f4", 00:11:00.658 "strip_size_kb": 0, 00:11:00.658 "state": "configuring", 00:11:00.658 "raid_level": "raid1", 00:11:00.658 "superblock": true, 00:11:00.658 "num_base_bdevs": 4, 00:11:00.658 "num_base_bdevs_discovered": 2, 00:11:00.658 "num_base_bdevs_operational": 4, 
00:11:00.658 "base_bdevs_list": [ 00:11:00.658 { 00:11:00.658 "name": "BaseBdev1", 00:11:00.658 "uuid": "8afa9f0b-3300-4a04-8143-3d1cb2497e0d", 00:11:00.658 "is_configured": true, 00:11:00.658 "data_offset": 2048, 00:11:00.658 "data_size": 63488 00:11:00.658 }, 00:11:00.658 { 00:11:00.658 "name": "BaseBdev2", 00:11:00.658 "uuid": "b250624f-845c-4404-bfd8-8d8f0886d9e6", 00:11:00.658 "is_configured": true, 00:11:00.658 "data_offset": 2048, 00:11:00.658 "data_size": 63488 00:11:00.658 }, 00:11:00.658 { 00:11:00.658 "name": "BaseBdev3", 00:11:00.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.658 "is_configured": false, 00:11:00.658 "data_offset": 0, 00:11:00.658 "data_size": 0 00:11:00.658 }, 00:11:00.658 { 00:11:00.658 "name": "BaseBdev4", 00:11:00.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.658 "is_configured": false, 00:11:00.658 "data_offset": 0, 00:11:00.658 "data_size": 0 00:11:00.658 } 00:11:00.658 ] 00:11:00.658 }' 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.658 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.228 [2024-12-07 16:36:59.939822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.228 BaseBdev3 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.228 [ 00:11:01.228 { 00:11:01.228 "name": "BaseBdev3", 00:11:01.228 "aliases": [ 00:11:01.228 "163fe0d4-8405-433c-b7bb-6fa20a9fafb8" 00:11:01.228 ], 00:11:01.228 "product_name": "Malloc disk", 00:11:01.228 "block_size": 512, 00:11:01.228 "num_blocks": 65536, 00:11:01.228 "uuid": "163fe0d4-8405-433c-b7bb-6fa20a9fafb8", 00:11:01.228 "assigned_rate_limits": { 00:11:01.228 "rw_ios_per_sec": 0, 00:11:01.228 "rw_mbytes_per_sec": 0, 00:11:01.228 "r_mbytes_per_sec": 0, 00:11:01.228 "w_mbytes_per_sec": 0 00:11:01.228 }, 00:11:01.228 "claimed": true, 00:11:01.228 "claim_type": "exclusive_write", 00:11:01.228 "zoned": false, 00:11:01.228 "supported_io_types": { 00:11:01.228 "read": true, 00:11:01.228 
"write": true, 00:11:01.228 "unmap": true, 00:11:01.228 "flush": true, 00:11:01.228 "reset": true, 00:11:01.228 "nvme_admin": false, 00:11:01.228 "nvme_io": false, 00:11:01.228 "nvme_io_md": false, 00:11:01.228 "write_zeroes": true, 00:11:01.228 "zcopy": true, 00:11:01.228 "get_zone_info": false, 00:11:01.228 "zone_management": false, 00:11:01.228 "zone_append": false, 00:11:01.228 "compare": false, 00:11:01.228 "compare_and_write": false, 00:11:01.228 "abort": true, 00:11:01.228 "seek_hole": false, 00:11:01.228 "seek_data": false, 00:11:01.228 "copy": true, 00:11:01.228 "nvme_iov_md": false 00:11:01.228 }, 00:11:01.228 "memory_domains": [ 00:11:01.228 { 00:11:01.228 "dma_device_id": "system", 00:11:01.228 "dma_device_type": 1 00:11:01.228 }, 00:11:01.228 { 00:11:01.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.228 "dma_device_type": 2 00:11:01.228 } 00:11:01.228 ], 00:11:01.228 "driver_specific": {} 00:11:01.228 } 00:11:01.228 ] 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.228 16:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.228 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.228 "name": "Existed_Raid", 00:11:01.228 "uuid": "7537900b-648a-496c-bd65-d43f3a2807f4", 00:11:01.228 "strip_size_kb": 0, 00:11:01.228 "state": "configuring", 00:11:01.228 "raid_level": "raid1", 00:11:01.228 "superblock": true, 00:11:01.228 "num_base_bdevs": 4, 00:11:01.228 "num_base_bdevs_discovered": 3, 00:11:01.228 "num_base_bdevs_operational": 4, 00:11:01.228 "base_bdevs_list": [ 00:11:01.228 { 00:11:01.228 "name": "BaseBdev1", 00:11:01.228 "uuid": "8afa9f0b-3300-4a04-8143-3d1cb2497e0d", 00:11:01.228 "is_configured": true, 00:11:01.228 "data_offset": 2048, 00:11:01.228 "data_size": 63488 00:11:01.228 }, 00:11:01.228 { 00:11:01.228 "name": "BaseBdev2", 00:11:01.228 "uuid": 
"b250624f-845c-4404-bfd8-8d8f0886d9e6", 00:11:01.228 "is_configured": true, 00:11:01.228 "data_offset": 2048, 00:11:01.228 "data_size": 63488 00:11:01.228 }, 00:11:01.228 { 00:11:01.228 "name": "BaseBdev3", 00:11:01.228 "uuid": "163fe0d4-8405-433c-b7bb-6fa20a9fafb8", 00:11:01.228 "is_configured": true, 00:11:01.228 "data_offset": 2048, 00:11:01.228 "data_size": 63488 00:11:01.228 }, 00:11:01.228 { 00:11:01.228 "name": "BaseBdev4", 00:11:01.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.228 "is_configured": false, 00:11:01.228 "data_offset": 0, 00:11:01.228 "data_size": 0 00:11:01.228 } 00:11:01.228 ] 00:11:01.228 }' 00:11:01.229 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.229 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.797 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:01.797 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.797 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.797 [2024-12-07 16:37:00.444804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:01.797 [2024-12-07 16:37:00.445077] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:01.797 [2024-12-07 16:37:00.445097] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:01.797 [2024-12-07 16:37:00.445466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:01.797 [2024-12-07 16:37:00.445641] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:01.797 [2024-12-07 16:37:00.445668] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 
00:11:01.797 [2024-12-07 16:37:00.445820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.797 BaseBdev4 00:11:01.797 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.797 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.798 [ 00:11:01.798 { 00:11:01.798 "name": "BaseBdev4", 00:11:01.798 "aliases": [ 00:11:01.798 "826a4b97-e107-4035-889c-68070f3f8865" 00:11:01.798 ], 00:11:01.798 "product_name": "Malloc disk", 00:11:01.798 "block_size": 512, 00:11:01.798 
"num_blocks": 65536, 00:11:01.798 "uuid": "826a4b97-e107-4035-889c-68070f3f8865", 00:11:01.798 "assigned_rate_limits": { 00:11:01.798 "rw_ios_per_sec": 0, 00:11:01.798 "rw_mbytes_per_sec": 0, 00:11:01.798 "r_mbytes_per_sec": 0, 00:11:01.798 "w_mbytes_per_sec": 0 00:11:01.798 }, 00:11:01.798 "claimed": true, 00:11:01.798 "claim_type": "exclusive_write", 00:11:01.798 "zoned": false, 00:11:01.798 "supported_io_types": { 00:11:01.798 "read": true, 00:11:01.798 "write": true, 00:11:01.798 "unmap": true, 00:11:01.798 "flush": true, 00:11:01.798 "reset": true, 00:11:01.798 "nvme_admin": false, 00:11:01.798 "nvme_io": false, 00:11:01.798 "nvme_io_md": false, 00:11:01.798 "write_zeroes": true, 00:11:01.798 "zcopy": true, 00:11:01.798 "get_zone_info": false, 00:11:01.798 "zone_management": false, 00:11:01.798 "zone_append": false, 00:11:01.798 "compare": false, 00:11:01.798 "compare_and_write": false, 00:11:01.798 "abort": true, 00:11:01.798 "seek_hole": false, 00:11:01.798 "seek_data": false, 00:11:01.798 "copy": true, 00:11:01.798 "nvme_iov_md": false 00:11:01.798 }, 00:11:01.798 "memory_domains": [ 00:11:01.798 { 00:11:01.798 "dma_device_id": "system", 00:11:01.798 "dma_device_type": 1 00:11:01.798 }, 00:11:01.798 { 00:11:01.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.798 "dma_device_type": 2 00:11:01.798 } 00:11:01.798 ], 00:11:01.798 "driver_specific": {} 00:11:01.798 } 00:11:01.798 ] 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.798 "name": "Existed_Raid", 00:11:01.798 "uuid": "7537900b-648a-496c-bd65-d43f3a2807f4", 00:11:01.798 "strip_size_kb": 0, 00:11:01.798 "state": "online", 00:11:01.798 "raid_level": "raid1", 00:11:01.798 "superblock": true, 00:11:01.798 "num_base_bdevs": 4, 
00:11:01.798 "num_base_bdevs_discovered": 4, 00:11:01.798 "num_base_bdevs_operational": 4, 00:11:01.798 "base_bdevs_list": [ 00:11:01.798 { 00:11:01.798 "name": "BaseBdev1", 00:11:01.798 "uuid": "8afa9f0b-3300-4a04-8143-3d1cb2497e0d", 00:11:01.798 "is_configured": true, 00:11:01.798 "data_offset": 2048, 00:11:01.798 "data_size": 63488 00:11:01.798 }, 00:11:01.798 { 00:11:01.798 "name": "BaseBdev2", 00:11:01.798 "uuid": "b250624f-845c-4404-bfd8-8d8f0886d9e6", 00:11:01.798 "is_configured": true, 00:11:01.798 "data_offset": 2048, 00:11:01.798 "data_size": 63488 00:11:01.798 }, 00:11:01.798 { 00:11:01.798 "name": "BaseBdev3", 00:11:01.798 "uuid": "163fe0d4-8405-433c-b7bb-6fa20a9fafb8", 00:11:01.798 "is_configured": true, 00:11:01.798 "data_offset": 2048, 00:11:01.798 "data_size": 63488 00:11:01.798 }, 00:11:01.798 { 00:11:01.798 "name": "BaseBdev4", 00:11:01.798 "uuid": "826a4b97-e107-4035-889c-68070f3f8865", 00:11:01.798 "is_configured": true, 00:11:01.798 "data_offset": 2048, 00:11:01.798 "data_size": 63488 00:11:01.798 } 00:11:01.798 ] 00:11:01.798 }' 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.798 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.057 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:02.057 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:02.057 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:02.057 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:02.057 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:02.057 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:02.057 
16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:02.057 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:02.057 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.057 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.057 [2024-12-07 16:37:00.920487] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.057 16:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.057 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:02.057 "name": "Existed_Raid", 00:11:02.057 "aliases": [ 00:11:02.057 "7537900b-648a-496c-bd65-d43f3a2807f4" 00:11:02.057 ], 00:11:02.057 "product_name": "Raid Volume", 00:11:02.057 "block_size": 512, 00:11:02.057 "num_blocks": 63488, 00:11:02.057 "uuid": "7537900b-648a-496c-bd65-d43f3a2807f4", 00:11:02.057 "assigned_rate_limits": { 00:11:02.057 "rw_ios_per_sec": 0, 00:11:02.057 "rw_mbytes_per_sec": 0, 00:11:02.057 "r_mbytes_per_sec": 0, 00:11:02.057 "w_mbytes_per_sec": 0 00:11:02.057 }, 00:11:02.057 "claimed": false, 00:11:02.057 "zoned": false, 00:11:02.057 "supported_io_types": { 00:11:02.057 "read": true, 00:11:02.057 "write": true, 00:11:02.057 "unmap": false, 00:11:02.057 "flush": false, 00:11:02.057 "reset": true, 00:11:02.057 "nvme_admin": false, 00:11:02.057 "nvme_io": false, 00:11:02.057 "nvme_io_md": false, 00:11:02.057 "write_zeroes": true, 00:11:02.057 "zcopy": false, 00:11:02.057 "get_zone_info": false, 00:11:02.057 "zone_management": false, 00:11:02.057 "zone_append": false, 00:11:02.057 "compare": false, 00:11:02.057 "compare_and_write": false, 00:11:02.057 "abort": false, 00:11:02.057 "seek_hole": false, 00:11:02.057 "seek_data": false, 00:11:02.057 "copy": false, 00:11:02.057 
"nvme_iov_md": false 00:11:02.057 }, 00:11:02.057 "memory_domains": [ 00:11:02.057 { 00:11:02.057 "dma_device_id": "system", 00:11:02.057 "dma_device_type": 1 00:11:02.057 }, 00:11:02.057 { 00:11:02.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.057 "dma_device_type": 2 00:11:02.057 }, 00:11:02.057 { 00:11:02.057 "dma_device_id": "system", 00:11:02.057 "dma_device_type": 1 00:11:02.057 }, 00:11:02.057 { 00:11:02.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.057 "dma_device_type": 2 00:11:02.057 }, 00:11:02.057 { 00:11:02.057 "dma_device_id": "system", 00:11:02.058 "dma_device_type": 1 00:11:02.058 }, 00:11:02.058 { 00:11:02.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.058 "dma_device_type": 2 00:11:02.058 }, 00:11:02.058 { 00:11:02.058 "dma_device_id": "system", 00:11:02.058 "dma_device_type": 1 00:11:02.058 }, 00:11:02.058 { 00:11:02.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.058 "dma_device_type": 2 00:11:02.058 } 00:11:02.058 ], 00:11:02.058 "driver_specific": { 00:11:02.058 "raid": { 00:11:02.058 "uuid": "7537900b-648a-496c-bd65-d43f3a2807f4", 00:11:02.058 "strip_size_kb": 0, 00:11:02.058 "state": "online", 00:11:02.058 "raid_level": "raid1", 00:11:02.058 "superblock": true, 00:11:02.058 "num_base_bdevs": 4, 00:11:02.058 "num_base_bdevs_discovered": 4, 00:11:02.058 "num_base_bdevs_operational": 4, 00:11:02.058 "base_bdevs_list": [ 00:11:02.058 { 00:11:02.058 "name": "BaseBdev1", 00:11:02.058 "uuid": "8afa9f0b-3300-4a04-8143-3d1cb2497e0d", 00:11:02.058 "is_configured": true, 00:11:02.058 "data_offset": 2048, 00:11:02.058 "data_size": 63488 00:11:02.058 }, 00:11:02.058 { 00:11:02.058 "name": "BaseBdev2", 00:11:02.058 "uuid": "b250624f-845c-4404-bfd8-8d8f0886d9e6", 00:11:02.058 "is_configured": true, 00:11:02.058 "data_offset": 2048, 00:11:02.058 "data_size": 63488 00:11:02.058 }, 00:11:02.058 { 00:11:02.058 "name": "BaseBdev3", 00:11:02.058 "uuid": "163fe0d4-8405-433c-b7bb-6fa20a9fafb8", 00:11:02.058 "is_configured": true, 
00:11:02.058 "data_offset": 2048, 00:11:02.058 "data_size": 63488 00:11:02.058 }, 00:11:02.058 { 00:11:02.058 "name": "BaseBdev4", 00:11:02.058 "uuid": "826a4b97-e107-4035-889c-68070f3f8865", 00:11:02.058 "is_configured": true, 00:11:02.058 "data_offset": 2048, 00:11:02.058 "data_size": 63488 00:11:02.058 } 00:11:02.058 ] 00:11:02.058 } 00:11:02.058 } 00:11:02.058 }' 00:11:02.317 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.317 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:02.317 BaseBdev2 00:11:02.317 BaseBdev3 00:11:02.317 BaseBdev4' 00:11:02.317 16:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.317 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.317 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.317 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.317 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:02.317 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.317 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.317 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.317 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.317 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.317 16:37:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.318 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.318 [2024-12-07 16:37:01.211641] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:02.577 16:37:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.577 "name": "Existed_Raid", 00:11:02.577 "uuid": "7537900b-648a-496c-bd65-d43f3a2807f4", 00:11:02.577 "strip_size_kb": 0, 00:11:02.577 
"state": "online", 00:11:02.577 "raid_level": "raid1", 00:11:02.577 "superblock": true, 00:11:02.577 "num_base_bdevs": 4, 00:11:02.577 "num_base_bdevs_discovered": 3, 00:11:02.577 "num_base_bdevs_operational": 3, 00:11:02.577 "base_bdevs_list": [ 00:11:02.577 { 00:11:02.577 "name": null, 00:11:02.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.577 "is_configured": false, 00:11:02.577 "data_offset": 0, 00:11:02.577 "data_size": 63488 00:11:02.577 }, 00:11:02.577 { 00:11:02.577 "name": "BaseBdev2", 00:11:02.577 "uuid": "b250624f-845c-4404-bfd8-8d8f0886d9e6", 00:11:02.577 "is_configured": true, 00:11:02.577 "data_offset": 2048, 00:11:02.577 "data_size": 63488 00:11:02.577 }, 00:11:02.577 { 00:11:02.577 "name": "BaseBdev3", 00:11:02.577 "uuid": "163fe0d4-8405-433c-b7bb-6fa20a9fafb8", 00:11:02.577 "is_configured": true, 00:11:02.577 "data_offset": 2048, 00:11:02.577 "data_size": 63488 00:11:02.577 }, 00:11:02.577 { 00:11:02.577 "name": "BaseBdev4", 00:11:02.577 "uuid": "826a4b97-e107-4035-889c-68070f3f8865", 00:11:02.577 "is_configured": true, 00:11:02.577 "data_offset": 2048, 00:11:02.577 "data_size": 63488 00:11:02.577 } 00:11:02.577 ] 00:11:02.577 }' 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.577 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.837 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:02.837 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:02.837 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:02.837 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.837 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.837 16:37:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.837 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.837 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:02.837 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:02.837 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:02.837 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.837 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.837 [2024-12-07 16:37:01.712507] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.097 [2024-12-07 16:37:01.793830] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.097 [2024-12-07 16:37:01.855037] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:03.097 [2024-12-07 16:37:01.855170] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.097 [2024-12-07 16:37:01.876921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.097 [2024-12-07 16:37:01.876979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.097 [2024-12-07 16:37:01.876993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.097 BaseBdev2 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:03.097 [ 00:11:03.097 { 00:11:03.097 "name": "BaseBdev2", 00:11:03.097 "aliases": [ 00:11:03.097 "ed544d55-59a7-4db9-a924-23b747bd1c3c" 00:11:03.097 ], 00:11:03.097 "product_name": "Malloc disk", 00:11:03.097 "block_size": 512, 00:11:03.097 "num_blocks": 65536, 00:11:03.097 "uuid": "ed544d55-59a7-4db9-a924-23b747bd1c3c", 00:11:03.097 "assigned_rate_limits": { 00:11:03.097 "rw_ios_per_sec": 0, 00:11:03.097 "rw_mbytes_per_sec": 0, 00:11:03.097 "r_mbytes_per_sec": 0, 00:11:03.097 "w_mbytes_per_sec": 0 00:11:03.097 }, 00:11:03.097 "claimed": false, 00:11:03.097 "zoned": false, 00:11:03.097 "supported_io_types": { 00:11:03.097 "read": true, 00:11:03.097 "write": true, 00:11:03.097 "unmap": true, 00:11:03.097 "flush": true, 00:11:03.097 "reset": true, 00:11:03.097 "nvme_admin": false, 00:11:03.097 "nvme_io": false, 00:11:03.097 "nvme_io_md": false, 00:11:03.097 "write_zeroes": true, 00:11:03.097 "zcopy": true, 00:11:03.097 "get_zone_info": false, 00:11:03.097 "zone_management": false, 00:11:03.097 "zone_append": false, 00:11:03.097 "compare": false, 00:11:03.097 "compare_and_write": false, 00:11:03.097 "abort": true, 00:11:03.097 "seek_hole": false, 00:11:03.097 "seek_data": false, 00:11:03.097 "copy": true, 00:11:03.097 "nvme_iov_md": false 00:11:03.097 }, 00:11:03.097 "memory_domains": [ 00:11:03.097 { 00:11:03.097 "dma_device_id": "system", 00:11:03.097 "dma_device_type": 1 00:11:03.097 }, 00:11:03.097 { 00:11:03.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.097 "dma_device_type": 2 00:11:03.097 } 00:11:03.097 ], 00:11:03.097 "driver_specific": {} 00:11:03.097 } 00:11:03.097 ] 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:03.097 16:37:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.097 16:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.358 BaseBdev3 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.358 16:37:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.358 [ 00:11:03.358 { 00:11:03.358 "name": "BaseBdev3", 00:11:03.358 "aliases": [ 00:11:03.358 "84ef520b-d633-4544-952b-e462272b0727" 00:11:03.358 ], 00:11:03.358 "product_name": "Malloc disk", 00:11:03.358 "block_size": 512, 00:11:03.358 "num_blocks": 65536, 00:11:03.358 "uuid": "84ef520b-d633-4544-952b-e462272b0727", 00:11:03.358 "assigned_rate_limits": { 00:11:03.358 "rw_ios_per_sec": 0, 00:11:03.358 "rw_mbytes_per_sec": 0, 00:11:03.358 "r_mbytes_per_sec": 0, 00:11:03.358 "w_mbytes_per_sec": 0 00:11:03.358 }, 00:11:03.358 "claimed": false, 00:11:03.358 "zoned": false, 00:11:03.358 "supported_io_types": { 00:11:03.358 "read": true, 00:11:03.358 "write": true, 00:11:03.358 "unmap": true, 00:11:03.358 "flush": true, 00:11:03.358 "reset": true, 00:11:03.358 "nvme_admin": false, 00:11:03.358 "nvme_io": false, 00:11:03.358 "nvme_io_md": false, 00:11:03.358 "write_zeroes": true, 00:11:03.358 "zcopy": true, 00:11:03.358 "get_zone_info": false, 00:11:03.358 "zone_management": false, 00:11:03.358 "zone_append": false, 00:11:03.358 "compare": false, 00:11:03.358 "compare_and_write": false, 00:11:03.358 "abort": true, 00:11:03.358 "seek_hole": false, 00:11:03.358 "seek_data": false, 00:11:03.358 "copy": true, 00:11:03.358 "nvme_iov_md": false 00:11:03.358 }, 00:11:03.358 "memory_domains": [ 00:11:03.358 { 00:11:03.358 "dma_device_id": "system", 00:11:03.358 "dma_device_type": 1 00:11:03.358 }, 00:11:03.358 { 00:11:03.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.358 "dma_device_type": 2 00:11:03.358 } 00:11:03.358 ], 00:11:03.358 "driver_specific": {} 00:11:03.358 } 00:11:03.358 ] 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.358 BaseBdev4 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.358 [ 00:11:03.358 { 00:11:03.358 "name": "BaseBdev4", 00:11:03.358 "aliases": [ 00:11:03.358 "be350eb0-91b9-4650-94d2-714ae671f92d" 00:11:03.358 ], 00:11:03.358 "product_name": "Malloc disk", 00:11:03.358 "block_size": 512, 00:11:03.358 "num_blocks": 65536, 00:11:03.358 "uuid": "be350eb0-91b9-4650-94d2-714ae671f92d", 00:11:03.358 "assigned_rate_limits": { 00:11:03.358 "rw_ios_per_sec": 0, 00:11:03.358 "rw_mbytes_per_sec": 0, 00:11:03.358 "r_mbytes_per_sec": 0, 00:11:03.358 "w_mbytes_per_sec": 0 00:11:03.358 }, 00:11:03.358 "claimed": false, 00:11:03.358 "zoned": false, 00:11:03.358 "supported_io_types": { 00:11:03.358 "read": true, 00:11:03.358 "write": true, 00:11:03.358 "unmap": true, 00:11:03.358 "flush": true, 00:11:03.358 "reset": true, 00:11:03.358 "nvme_admin": false, 00:11:03.358 "nvme_io": false, 00:11:03.358 "nvme_io_md": false, 00:11:03.358 "write_zeroes": true, 00:11:03.358 "zcopy": true, 00:11:03.358 "get_zone_info": false, 00:11:03.358 "zone_management": false, 00:11:03.358 "zone_append": false, 00:11:03.358 "compare": false, 00:11:03.358 "compare_and_write": false, 00:11:03.358 "abort": true, 00:11:03.358 "seek_hole": false, 00:11:03.358 "seek_data": false, 00:11:03.358 "copy": true, 00:11:03.358 "nvme_iov_md": false 00:11:03.358 }, 00:11:03.358 "memory_domains": [ 00:11:03.358 { 00:11:03.358 "dma_device_id": "system", 00:11:03.358 "dma_device_type": 1 00:11:03.358 }, 00:11:03.358 { 00:11:03.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.358 "dma_device_type": 2 00:11:03.358 } 00:11:03.358 ], 00:11:03.358 "driver_specific": {} 00:11:03.358 } 00:11:03.358 ] 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.358 [2024-12-07 16:37:02.122673] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:03.358 [2024-12-07 16:37:02.122736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:03.358 [2024-12-07 16:37:02.122759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.358 [2024-12-07 16:37:02.125005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.358 [2024-12-07 16:37:02.125054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.358 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.359 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.359 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.359 "name": "Existed_Raid", 00:11:03.359 "uuid": "52b74507-e343-495f-960c-7b360fca1f0f", 00:11:03.359 "strip_size_kb": 0, 00:11:03.359 "state": "configuring", 00:11:03.359 "raid_level": "raid1", 00:11:03.359 "superblock": true, 00:11:03.359 "num_base_bdevs": 4, 00:11:03.359 "num_base_bdevs_discovered": 3, 00:11:03.359 "num_base_bdevs_operational": 4, 00:11:03.359 "base_bdevs_list": [ 00:11:03.359 { 00:11:03.359 "name": "BaseBdev1", 00:11:03.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.359 "is_configured": false, 00:11:03.359 "data_offset": 0, 00:11:03.359 "data_size": 0 00:11:03.359 }, 00:11:03.359 { 00:11:03.359 "name": "BaseBdev2", 00:11:03.359 "uuid": "ed544d55-59a7-4db9-a924-23b747bd1c3c", 
00:11:03.359 "is_configured": true, 00:11:03.359 "data_offset": 2048, 00:11:03.359 "data_size": 63488 00:11:03.359 }, 00:11:03.359 { 00:11:03.359 "name": "BaseBdev3", 00:11:03.359 "uuid": "84ef520b-d633-4544-952b-e462272b0727", 00:11:03.359 "is_configured": true, 00:11:03.359 "data_offset": 2048, 00:11:03.359 "data_size": 63488 00:11:03.359 }, 00:11:03.359 { 00:11:03.359 "name": "BaseBdev4", 00:11:03.359 "uuid": "be350eb0-91b9-4650-94d2-714ae671f92d", 00:11:03.359 "is_configured": true, 00:11:03.359 "data_offset": 2048, 00:11:03.359 "data_size": 63488 00:11:03.359 } 00:11:03.359 ] 00:11:03.359 }' 00:11:03.359 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.359 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.927 [2024-12-07 16:37:02.565985] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.927 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.928 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.928 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.928 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.928 "name": "Existed_Raid", 00:11:03.928 "uuid": "52b74507-e343-495f-960c-7b360fca1f0f", 00:11:03.928 "strip_size_kb": 0, 00:11:03.928 "state": "configuring", 00:11:03.928 "raid_level": "raid1", 00:11:03.928 "superblock": true, 00:11:03.928 "num_base_bdevs": 4, 00:11:03.928 "num_base_bdevs_discovered": 2, 00:11:03.928 "num_base_bdevs_operational": 4, 00:11:03.928 "base_bdevs_list": [ 00:11:03.928 { 00:11:03.928 "name": "BaseBdev1", 00:11:03.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.928 "is_configured": false, 00:11:03.928 "data_offset": 0, 00:11:03.928 "data_size": 0 00:11:03.928 }, 00:11:03.928 { 00:11:03.928 "name": null, 00:11:03.928 "uuid": "ed544d55-59a7-4db9-a924-23b747bd1c3c", 00:11:03.928 
"is_configured": false, 00:11:03.928 "data_offset": 0, 00:11:03.928 "data_size": 63488 00:11:03.928 }, 00:11:03.928 { 00:11:03.928 "name": "BaseBdev3", 00:11:03.928 "uuid": "84ef520b-d633-4544-952b-e462272b0727", 00:11:03.928 "is_configured": true, 00:11:03.928 "data_offset": 2048, 00:11:03.928 "data_size": 63488 00:11:03.928 }, 00:11:03.928 { 00:11:03.928 "name": "BaseBdev4", 00:11:03.928 "uuid": "be350eb0-91b9-4650-94d2-714ae671f92d", 00:11:03.928 "is_configured": true, 00:11:03.928 "data_offset": 2048, 00:11:03.928 "data_size": 63488 00:11:03.928 } 00:11:03.928 ] 00:11:03.928 }' 00:11:03.928 16:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.928 16:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.187 [2024-12-07 16:37:03.074663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.187 BaseBdev1 
00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.187 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.446 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.446 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:04.446 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.446 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.446 [ 00:11:04.446 { 00:11:04.446 "name": "BaseBdev1", 00:11:04.446 "aliases": [ 00:11:04.446 "cfff6df4-cde2-47fe-97a4-6d14abf701c7" 00:11:04.446 ], 00:11:04.446 "product_name": "Malloc disk", 00:11:04.446 "block_size": 512, 00:11:04.446 "num_blocks": 65536, 00:11:04.446 "uuid": "cfff6df4-cde2-47fe-97a4-6d14abf701c7", 00:11:04.446 "assigned_rate_limits": { 00:11:04.446 
"rw_ios_per_sec": 0, 00:11:04.446 "rw_mbytes_per_sec": 0, 00:11:04.446 "r_mbytes_per_sec": 0, 00:11:04.446 "w_mbytes_per_sec": 0 00:11:04.446 }, 00:11:04.446 "claimed": true, 00:11:04.446 "claim_type": "exclusive_write", 00:11:04.446 "zoned": false, 00:11:04.446 "supported_io_types": { 00:11:04.446 "read": true, 00:11:04.446 "write": true, 00:11:04.446 "unmap": true, 00:11:04.446 "flush": true, 00:11:04.446 "reset": true, 00:11:04.446 "nvme_admin": false, 00:11:04.446 "nvme_io": false, 00:11:04.446 "nvme_io_md": false, 00:11:04.446 "write_zeroes": true, 00:11:04.446 "zcopy": true, 00:11:04.446 "get_zone_info": false, 00:11:04.446 "zone_management": false, 00:11:04.446 "zone_append": false, 00:11:04.446 "compare": false, 00:11:04.446 "compare_and_write": false, 00:11:04.446 "abort": true, 00:11:04.446 "seek_hole": false, 00:11:04.446 "seek_data": false, 00:11:04.446 "copy": true, 00:11:04.446 "nvme_iov_md": false 00:11:04.446 }, 00:11:04.446 "memory_domains": [ 00:11:04.446 { 00:11:04.446 "dma_device_id": "system", 00:11:04.446 "dma_device_type": 1 00:11:04.446 }, 00:11:04.446 { 00:11:04.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.446 "dma_device_type": 2 00:11:04.446 } 00:11:04.446 ], 00:11:04.446 "driver_specific": {} 00:11:04.446 } 00:11:04.446 ] 00:11:04.446 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.446 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:04.446 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:04.446 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.446 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.446 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:04.446 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.447 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.447 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.447 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.447 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.447 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.447 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.447 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.447 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.447 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.447 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.447 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.447 "name": "Existed_Raid", 00:11:04.447 "uuid": "52b74507-e343-495f-960c-7b360fca1f0f", 00:11:04.447 "strip_size_kb": 0, 00:11:04.447 "state": "configuring", 00:11:04.447 "raid_level": "raid1", 00:11:04.447 "superblock": true, 00:11:04.447 "num_base_bdevs": 4, 00:11:04.447 "num_base_bdevs_discovered": 3, 00:11:04.447 "num_base_bdevs_operational": 4, 00:11:04.447 "base_bdevs_list": [ 00:11:04.447 { 00:11:04.447 "name": "BaseBdev1", 00:11:04.447 "uuid": "cfff6df4-cde2-47fe-97a4-6d14abf701c7", 00:11:04.447 "is_configured": true, 00:11:04.447 "data_offset": 2048, 00:11:04.447 "data_size": 63488 
00:11:04.447 }, 00:11:04.447 { 00:11:04.447 "name": null, 00:11:04.447 "uuid": "ed544d55-59a7-4db9-a924-23b747bd1c3c", 00:11:04.447 "is_configured": false, 00:11:04.447 "data_offset": 0, 00:11:04.447 "data_size": 63488 00:11:04.447 }, 00:11:04.447 { 00:11:04.447 "name": "BaseBdev3", 00:11:04.447 "uuid": "84ef520b-d633-4544-952b-e462272b0727", 00:11:04.447 "is_configured": true, 00:11:04.447 "data_offset": 2048, 00:11:04.447 "data_size": 63488 00:11:04.447 }, 00:11:04.447 { 00:11:04.447 "name": "BaseBdev4", 00:11:04.447 "uuid": "be350eb0-91b9-4650-94d2-714ae671f92d", 00:11:04.447 "is_configured": true, 00:11:04.447 "data_offset": 2048, 00:11:04.447 "data_size": 63488 00:11:04.447 } 00:11:04.447 ] 00:11:04.447 }' 00:11:04.447 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.447 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.705 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.705 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:04.705 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.705 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.964 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.965 
[2024-12-07 16:37:03.645774] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.965 16:37:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.965 "name": "Existed_Raid", 00:11:04.965 "uuid": "52b74507-e343-495f-960c-7b360fca1f0f", 00:11:04.965 "strip_size_kb": 0, 00:11:04.965 "state": "configuring", 00:11:04.965 "raid_level": "raid1", 00:11:04.965 "superblock": true, 00:11:04.965 "num_base_bdevs": 4, 00:11:04.965 "num_base_bdevs_discovered": 2, 00:11:04.965 "num_base_bdevs_operational": 4, 00:11:04.965 "base_bdevs_list": [ 00:11:04.965 { 00:11:04.965 "name": "BaseBdev1", 00:11:04.965 "uuid": "cfff6df4-cde2-47fe-97a4-6d14abf701c7", 00:11:04.965 "is_configured": true, 00:11:04.965 "data_offset": 2048, 00:11:04.965 "data_size": 63488 00:11:04.965 }, 00:11:04.965 { 00:11:04.965 "name": null, 00:11:04.965 "uuid": "ed544d55-59a7-4db9-a924-23b747bd1c3c", 00:11:04.965 "is_configured": false, 00:11:04.965 "data_offset": 0, 00:11:04.965 "data_size": 63488 00:11:04.965 }, 00:11:04.965 { 00:11:04.965 "name": null, 00:11:04.965 "uuid": "84ef520b-d633-4544-952b-e462272b0727", 00:11:04.965 "is_configured": false, 00:11:04.965 "data_offset": 0, 00:11:04.965 "data_size": 63488 00:11:04.965 }, 00:11:04.965 { 00:11:04.965 "name": "BaseBdev4", 00:11:04.965 "uuid": "be350eb0-91b9-4650-94d2-714ae671f92d", 00:11:04.965 "is_configured": true, 00:11:04.965 "data_offset": 2048, 00:11:04.965 "data_size": 63488 00:11:04.965 } 00:11:04.965 ] 00:11:04.965 }' 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.965 16:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.224 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.224 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.224 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:05.224 
16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.224 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.224 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:05.224 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:05.224 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.224 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.224 [2024-12-07 16:37:04.121032] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.483 "name": "Existed_Raid", 00:11:05.483 "uuid": "52b74507-e343-495f-960c-7b360fca1f0f", 00:11:05.483 "strip_size_kb": 0, 00:11:05.483 "state": "configuring", 00:11:05.483 "raid_level": "raid1", 00:11:05.483 "superblock": true, 00:11:05.483 "num_base_bdevs": 4, 00:11:05.483 "num_base_bdevs_discovered": 3, 00:11:05.483 "num_base_bdevs_operational": 4, 00:11:05.483 "base_bdevs_list": [ 00:11:05.483 { 00:11:05.483 "name": "BaseBdev1", 00:11:05.483 "uuid": "cfff6df4-cde2-47fe-97a4-6d14abf701c7", 00:11:05.483 "is_configured": true, 00:11:05.483 "data_offset": 2048, 00:11:05.483 "data_size": 63488 00:11:05.483 }, 00:11:05.483 { 00:11:05.483 "name": null, 00:11:05.483 "uuid": "ed544d55-59a7-4db9-a924-23b747bd1c3c", 00:11:05.483 "is_configured": false, 00:11:05.483 "data_offset": 0, 00:11:05.483 "data_size": 63488 00:11:05.483 }, 00:11:05.483 { 00:11:05.483 "name": "BaseBdev3", 00:11:05.483 "uuid": "84ef520b-d633-4544-952b-e462272b0727", 00:11:05.483 "is_configured": true, 00:11:05.483 "data_offset": 2048, 00:11:05.483 "data_size": 63488 00:11:05.483 }, 00:11:05.483 { 00:11:05.483 "name": "BaseBdev4", 00:11:05.483 "uuid": 
"be350eb0-91b9-4650-94d2-714ae671f92d", 00:11:05.483 "is_configured": true, 00:11:05.483 "data_offset": 2048, 00:11:05.483 "data_size": 63488 00:11:05.483 } 00:11:05.483 ] 00:11:05.483 }' 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.483 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.743 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:05.743 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.743 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.743 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.743 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.743 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:05.743 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:05.743 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.743 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.743 [2024-12-07 16:37:04.616196] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.002 "name": "Existed_Raid", 00:11:06.002 "uuid": "52b74507-e343-495f-960c-7b360fca1f0f", 00:11:06.002 "strip_size_kb": 0, 00:11:06.002 "state": "configuring", 00:11:06.002 "raid_level": "raid1", 00:11:06.002 "superblock": true, 00:11:06.002 "num_base_bdevs": 4, 00:11:06.002 "num_base_bdevs_discovered": 2, 00:11:06.002 "num_base_bdevs_operational": 4, 00:11:06.002 "base_bdevs_list": [ 00:11:06.002 { 00:11:06.002 "name": null, 00:11:06.002 
"uuid": "cfff6df4-cde2-47fe-97a4-6d14abf701c7", 00:11:06.002 "is_configured": false, 00:11:06.002 "data_offset": 0, 00:11:06.002 "data_size": 63488 00:11:06.002 }, 00:11:06.002 { 00:11:06.002 "name": null, 00:11:06.002 "uuid": "ed544d55-59a7-4db9-a924-23b747bd1c3c", 00:11:06.002 "is_configured": false, 00:11:06.002 "data_offset": 0, 00:11:06.002 "data_size": 63488 00:11:06.002 }, 00:11:06.002 { 00:11:06.002 "name": "BaseBdev3", 00:11:06.002 "uuid": "84ef520b-d633-4544-952b-e462272b0727", 00:11:06.002 "is_configured": true, 00:11:06.002 "data_offset": 2048, 00:11:06.002 "data_size": 63488 00:11:06.002 }, 00:11:06.002 { 00:11:06.002 "name": "BaseBdev4", 00:11:06.002 "uuid": "be350eb0-91b9-4650-94d2-714ae671f92d", 00:11:06.002 "is_configured": true, 00:11:06.002 "data_offset": 2048, 00:11:06.002 "data_size": 63488 00:11:06.002 } 00:11:06.002 ] 00:11:06.002 }' 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.002 16:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.262 [2024-12-07 16:37:05.071935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.262 16:37:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.262 "name": "Existed_Raid", 00:11:06.262 "uuid": "52b74507-e343-495f-960c-7b360fca1f0f", 00:11:06.262 "strip_size_kb": 0, 00:11:06.262 "state": "configuring", 00:11:06.262 "raid_level": "raid1", 00:11:06.262 "superblock": true, 00:11:06.262 "num_base_bdevs": 4, 00:11:06.262 "num_base_bdevs_discovered": 3, 00:11:06.262 "num_base_bdevs_operational": 4, 00:11:06.262 "base_bdevs_list": [ 00:11:06.262 { 00:11:06.262 "name": null, 00:11:06.262 "uuid": "cfff6df4-cde2-47fe-97a4-6d14abf701c7", 00:11:06.262 "is_configured": false, 00:11:06.262 "data_offset": 0, 00:11:06.262 "data_size": 63488 00:11:06.262 }, 00:11:06.262 { 00:11:06.262 "name": "BaseBdev2", 00:11:06.262 "uuid": "ed544d55-59a7-4db9-a924-23b747bd1c3c", 00:11:06.262 "is_configured": true, 00:11:06.262 "data_offset": 2048, 00:11:06.262 "data_size": 63488 00:11:06.262 }, 00:11:06.262 { 00:11:06.262 "name": "BaseBdev3", 00:11:06.262 "uuid": "84ef520b-d633-4544-952b-e462272b0727", 00:11:06.262 "is_configured": true, 00:11:06.262 "data_offset": 2048, 00:11:06.262 "data_size": 63488 00:11:06.262 }, 00:11:06.262 { 00:11:06.262 "name": "BaseBdev4", 00:11:06.262 "uuid": "be350eb0-91b9-4650-94d2-714ae671f92d", 00:11:06.262 "is_configured": true, 00:11:06.262 "data_offset": 2048, 00:11:06.262 "data_size": 63488 00:11:06.262 } 00:11:06.262 ] 00:11:06.262 }' 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.262 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.830 16:37:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cfff6df4-cde2-47fe-97a4-6d14abf701c7 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.830 [2024-12-07 16:37:05.620519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:06.830 [2024-12-07 16:37:05.620756] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:06.830 [2024-12-07 16:37:05.620781] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:06.830 [2024-12-07 16:37:05.621083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:11:06.830 [2024-12-07 16:37:05.621251] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:06.830 [2024-12-07 16:37:05.621267] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:06.830 [2024-12-07 16:37:05.621405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.830 NewBaseBdev 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:06.830 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.830 16:37:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.830 [ 00:11:06.830 { 00:11:06.830 "name": "NewBaseBdev", 00:11:06.830 "aliases": [ 00:11:06.830 "cfff6df4-cde2-47fe-97a4-6d14abf701c7" 00:11:06.830 ], 00:11:06.830 "product_name": "Malloc disk", 00:11:06.830 "block_size": 512, 00:11:06.830 "num_blocks": 65536, 00:11:06.830 "uuid": "cfff6df4-cde2-47fe-97a4-6d14abf701c7", 00:11:06.830 "assigned_rate_limits": { 00:11:06.830 "rw_ios_per_sec": 0, 00:11:06.830 "rw_mbytes_per_sec": 0, 00:11:06.830 "r_mbytes_per_sec": 0, 00:11:06.830 "w_mbytes_per_sec": 0 00:11:06.830 }, 00:11:06.830 "claimed": true, 00:11:06.830 "claim_type": "exclusive_write", 00:11:06.830 "zoned": false, 00:11:06.830 "supported_io_types": { 00:11:06.830 "read": true, 00:11:06.830 "write": true, 00:11:06.830 "unmap": true, 00:11:06.830 "flush": true, 00:11:06.830 "reset": true, 00:11:06.830 "nvme_admin": false, 00:11:06.830 "nvme_io": false, 00:11:06.830 "nvme_io_md": false, 00:11:06.831 "write_zeroes": true, 00:11:06.831 "zcopy": true, 00:11:06.831 "get_zone_info": false, 00:11:06.831 "zone_management": false, 00:11:06.831 "zone_append": false, 00:11:06.831 "compare": false, 00:11:06.831 "compare_and_write": false, 00:11:06.831 "abort": true, 00:11:06.831 "seek_hole": false, 00:11:06.831 "seek_data": false, 00:11:06.831 "copy": true, 00:11:06.831 "nvme_iov_md": false 00:11:06.831 }, 00:11:06.831 "memory_domains": [ 00:11:06.831 { 00:11:06.831 "dma_device_id": "system", 00:11:06.831 "dma_device_type": 1 00:11:06.831 }, 00:11:06.831 { 00:11:06.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.831 "dma_device_type": 2 00:11:06.831 } 00:11:06.831 ], 00:11:06.831 "driver_specific": {} 00:11:06.831 } 00:11:06.831 ] 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:06.831 16:37:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.831 "name": "Existed_Raid", 00:11:06.831 "uuid": "52b74507-e343-495f-960c-7b360fca1f0f", 00:11:06.831 "strip_size_kb": 0, 00:11:06.831 
"state": "online", 00:11:06.831 "raid_level": "raid1", 00:11:06.831 "superblock": true, 00:11:06.831 "num_base_bdevs": 4, 00:11:06.831 "num_base_bdevs_discovered": 4, 00:11:06.831 "num_base_bdevs_operational": 4, 00:11:06.831 "base_bdevs_list": [ 00:11:06.831 { 00:11:06.831 "name": "NewBaseBdev", 00:11:06.831 "uuid": "cfff6df4-cde2-47fe-97a4-6d14abf701c7", 00:11:06.831 "is_configured": true, 00:11:06.831 "data_offset": 2048, 00:11:06.831 "data_size": 63488 00:11:06.831 }, 00:11:06.831 { 00:11:06.831 "name": "BaseBdev2", 00:11:06.831 "uuid": "ed544d55-59a7-4db9-a924-23b747bd1c3c", 00:11:06.831 "is_configured": true, 00:11:06.831 "data_offset": 2048, 00:11:06.831 "data_size": 63488 00:11:06.831 }, 00:11:06.831 { 00:11:06.831 "name": "BaseBdev3", 00:11:06.831 "uuid": "84ef520b-d633-4544-952b-e462272b0727", 00:11:06.831 "is_configured": true, 00:11:06.831 "data_offset": 2048, 00:11:06.831 "data_size": 63488 00:11:06.831 }, 00:11:06.831 { 00:11:06.831 "name": "BaseBdev4", 00:11:06.831 "uuid": "be350eb0-91b9-4650-94d2-714ae671f92d", 00:11:06.831 "is_configured": true, 00:11:06.831 "data_offset": 2048, 00:11:06.831 "data_size": 63488 00:11:06.831 } 00:11:06.831 ] 00:11:06.831 }' 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.831 16:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.398 
16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.398 [2024-12-07 16:37:06.128103] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.398 "name": "Existed_Raid", 00:11:07.398 "aliases": [ 00:11:07.398 "52b74507-e343-495f-960c-7b360fca1f0f" 00:11:07.398 ], 00:11:07.398 "product_name": "Raid Volume", 00:11:07.398 "block_size": 512, 00:11:07.398 "num_blocks": 63488, 00:11:07.398 "uuid": "52b74507-e343-495f-960c-7b360fca1f0f", 00:11:07.398 "assigned_rate_limits": { 00:11:07.398 "rw_ios_per_sec": 0, 00:11:07.398 "rw_mbytes_per_sec": 0, 00:11:07.398 "r_mbytes_per_sec": 0, 00:11:07.398 "w_mbytes_per_sec": 0 00:11:07.398 }, 00:11:07.398 "claimed": false, 00:11:07.398 "zoned": false, 00:11:07.398 "supported_io_types": { 00:11:07.398 "read": true, 00:11:07.398 "write": true, 00:11:07.398 "unmap": false, 00:11:07.398 "flush": false, 00:11:07.398 "reset": true, 00:11:07.398 "nvme_admin": false, 00:11:07.398 "nvme_io": false, 00:11:07.398 "nvme_io_md": false, 00:11:07.398 "write_zeroes": true, 00:11:07.398 "zcopy": false, 00:11:07.398 "get_zone_info": false, 00:11:07.398 "zone_management": false, 00:11:07.398 "zone_append": false, 00:11:07.398 "compare": false, 00:11:07.398 "compare_and_write": false, 00:11:07.398 
"abort": false, 00:11:07.398 "seek_hole": false, 00:11:07.398 "seek_data": false, 00:11:07.398 "copy": false, 00:11:07.398 "nvme_iov_md": false 00:11:07.398 }, 00:11:07.398 "memory_domains": [ 00:11:07.398 { 00:11:07.398 "dma_device_id": "system", 00:11:07.398 "dma_device_type": 1 00:11:07.398 }, 00:11:07.398 { 00:11:07.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.398 "dma_device_type": 2 00:11:07.398 }, 00:11:07.398 { 00:11:07.398 "dma_device_id": "system", 00:11:07.398 "dma_device_type": 1 00:11:07.398 }, 00:11:07.398 { 00:11:07.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.398 "dma_device_type": 2 00:11:07.398 }, 00:11:07.398 { 00:11:07.398 "dma_device_id": "system", 00:11:07.398 "dma_device_type": 1 00:11:07.398 }, 00:11:07.398 { 00:11:07.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.398 "dma_device_type": 2 00:11:07.398 }, 00:11:07.398 { 00:11:07.398 "dma_device_id": "system", 00:11:07.398 "dma_device_type": 1 00:11:07.398 }, 00:11:07.398 { 00:11:07.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.398 "dma_device_type": 2 00:11:07.398 } 00:11:07.398 ], 00:11:07.398 "driver_specific": { 00:11:07.398 "raid": { 00:11:07.398 "uuid": "52b74507-e343-495f-960c-7b360fca1f0f", 00:11:07.398 "strip_size_kb": 0, 00:11:07.398 "state": "online", 00:11:07.398 "raid_level": "raid1", 00:11:07.398 "superblock": true, 00:11:07.398 "num_base_bdevs": 4, 00:11:07.398 "num_base_bdevs_discovered": 4, 00:11:07.398 "num_base_bdevs_operational": 4, 00:11:07.398 "base_bdevs_list": [ 00:11:07.398 { 00:11:07.398 "name": "NewBaseBdev", 00:11:07.398 "uuid": "cfff6df4-cde2-47fe-97a4-6d14abf701c7", 00:11:07.398 "is_configured": true, 00:11:07.398 "data_offset": 2048, 00:11:07.398 "data_size": 63488 00:11:07.398 }, 00:11:07.398 { 00:11:07.398 "name": "BaseBdev2", 00:11:07.398 "uuid": "ed544d55-59a7-4db9-a924-23b747bd1c3c", 00:11:07.398 "is_configured": true, 00:11:07.398 "data_offset": 2048, 00:11:07.398 "data_size": 63488 00:11:07.398 }, 00:11:07.398 { 
00:11:07.398 "name": "BaseBdev3", 00:11:07.398 "uuid": "84ef520b-d633-4544-952b-e462272b0727", 00:11:07.398 "is_configured": true, 00:11:07.398 "data_offset": 2048, 00:11:07.398 "data_size": 63488 00:11:07.398 }, 00:11:07.398 { 00:11:07.398 "name": "BaseBdev4", 00:11:07.398 "uuid": "be350eb0-91b9-4650-94d2-714ae671f92d", 00:11:07.398 "is_configured": true, 00:11:07.398 "data_offset": 2048, 00:11:07.398 "data_size": 63488 00:11:07.398 } 00:11:07.398 ] 00:11:07.398 } 00:11:07.398 } 00:11:07.398 }' 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:07.398 BaseBdev2 00:11:07.398 BaseBdev3 00:11:07.398 BaseBdev4' 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.398 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.657 [2024-12-07 16:37:06.471145] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:07.657 [2024-12-07 16:37:06.471184] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.657 [2024-12-07 16:37:06.471294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.657 [2024-12-07 16:37:06.471639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.657 [2024-12-07 16:37:06.471670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84950 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84950 ']' 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84950 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84950 00:11:07.657 killing process with pid 84950 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84950' 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84950 00:11:07.657 [2024-12-07 16:37:06.510957] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.657 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84950 00:11:07.915 [2024-12-07 16:37:06.591140] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.175 ************************************ 00:11:08.175 END TEST raid_state_function_test_sb 00:11:08.175 ************************************ 00:11:08.175 16:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:08.175 00:11:08.175 real 0m9.950s 
00:11:08.175 user 0m16.615s 00:11:08.175 sys 0m2.093s 00:11:08.175 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.175 16:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.175 16:37:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:08.175 16:37:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:08.175 16:37:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.175 16:37:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.175 ************************************ 00:11:08.175 START TEST raid_superblock_test 00:11:08.175 ************************************ 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:08.175 16:37:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85606 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85606 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85606 ']' 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.175 16:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.433 [2024-12-07 16:37:07.145025] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:08.433 [2024-12-07 16:37:07.145904] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85606 ] 00:11:08.433 [2024-12-07 16:37:07.326130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.692 [2024-12-07 16:37:07.399787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.692 [2024-12-07 16:37:07.478012] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.692 [2024-12-07 16:37:07.478086] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.278 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.278 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:09.278 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:09.278 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.278 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:09.278 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:09.278 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:09.278 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:09.278 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:09.278 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:09.279 
16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.279 malloc1 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.279 [2024-12-07 16:37:08.041727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:09.279 [2024-12-07 16:37:08.041810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.279 [2024-12-07 16:37:08.041839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:09.279 [2024-12-07 16:37:08.041864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.279 [2024-12-07 16:37:08.044298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.279 [2024-12-07 16:37:08.044338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:09.279 pt1 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.279 malloc2 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.279 [2024-12-07 16:37:08.087712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:09.279 [2024-12-07 16:37:08.087777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.279 [2024-12-07 16:37:08.087796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:09.279 [2024-12-07 16:37:08.087809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.279 [2024-12-07 16:37:08.090508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.279 [2024-12-07 16:37:08.090549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:09.279 
pt2 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.279 malloc3 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.279 [2024-12-07 16:37:08.118563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:09.279 [2024-12-07 16:37:08.118616] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.279 [2024-12-07 16:37:08.118637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:09.279 [2024-12-07 16:37:08.118648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.279 [2024-12-07 16:37:08.120977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.279 [2024-12-07 16:37:08.121015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:09.279 pt3 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.279 malloc4 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.279 [2024-12-07 16:37:08.149041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:09.279 [2024-12-07 16:37:08.149091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.279 [2024-12-07 16:37:08.149106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:09.279 [2024-12-07 16:37:08.149121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.279 [2024-12-07 16:37:08.151583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.279 [2024-12-07 16:37:08.151620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:09.279 pt4 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.279 [2024-12-07 16:37:08.161123] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:09.279 [2024-12-07 16:37:08.163315] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.279 [2024-12-07 16:37:08.163398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:09.279 [2024-12-07 16:37:08.163442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:09.279 [2024-12-07 16:37:08.163606] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:09.279 [2024-12-07 16:37:08.163626] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:09.279 [2024-12-07 16:37:08.163920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:09.279 [2024-12-07 16:37:08.164090] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:09.279 [2024-12-07 16:37:08.164108] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:09.279 [2024-12-07 16:37:08.164260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.279 
16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.279 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.535 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.535 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.535 "name": "raid_bdev1", 00:11:09.535 "uuid": "099ed012-bce6-434d-a60d-46967e1317e0", 00:11:09.535 "strip_size_kb": 0, 00:11:09.535 "state": "online", 00:11:09.535 "raid_level": "raid1", 00:11:09.535 "superblock": true, 00:11:09.535 "num_base_bdevs": 4, 00:11:09.535 "num_base_bdevs_discovered": 4, 00:11:09.535 "num_base_bdevs_operational": 4, 00:11:09.535 "base_bdevs_list": [ 00:11:09.535 { 00:11:09.535 "name": "pt1", 00:11:09.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.535 "is_configured": true, 00:11:09.535 "data_offset": 2048, 00:11:09.535 "data_size": 63488 00:11:09.535 }, 00:11:09.535 { 00:11:09.535 "name": "pt2", 00:11:09.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.535 "is_configured": true, 00:11:09.535 "data_offset": 2048, 00:11:09.535 "data_size": 63488 00:11:09.535 }, 00:11:09.535 { 00:11:09.535 "name": "pt3", 00:11:09.535 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.535 "is_configured": true, 00:11:09.535 "data_offset": 2048, 00:11:09.535 "data_size": 63488 
00:11:09.535 }, 00:11:09.535 { 00:11:09.535 "name": "pt4", 00:11:09.535 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:09.535 "is_configured": true, 00:11:09.535 "data_offset": 2048, 00:11:09.535 "data_size": 63488 00:11:09.535 } 00:11:09.535 ] 00:11:09.535 }' 00:11:09.535 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.535 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.794 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:09.794 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:09.794 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.794 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:09.794 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.794 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.794 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.794 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.794 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.794 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.794 [2024-12-07 16:37:08.616704] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.794 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.794 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.794 "name": "raid_bdev1", 00:11:09.794 "aliases": [ 00:11:09.794 "099ed012-bce6-434d-a60d-46967e1317e0" 00:11:09.794 ], 
00:11:09.794 "product_name": "Raid Volume", 00:11:09.794 "block_size": 512, 00:11:09.794 "num_blocks": 63488, 00:11:09.794 "uuid": "099ed012-bce6-434d-a60d-46967e1317e0", 00:11:09.794 "assigned_rate_limits": { 00:11:09.794 "rw_ios_per_sec": 0, 00:11:09.794 "rw_mbytes_per_sec": 0, 00:11:09.794 "r_mbytes_per_sec": 0, 00:11:09.794 "w_mbytes_per_sec": 0 00:11:09.794 }, 00:11:09.794 "claimed": false, 00:11:09.794 "zoned": false, 00:11:09.794 "supported_io_types": { 00:11:09.794 "read": true, 00:11:09.794 "write": true, 00:11:09.794 "unmap": false, 00:11:09.794 "flush": false, 00:11:09.794 "reset": true, 00:11:09.794 "nvme_admin": false, 00:11:09.794 "nvme_io": false, 00:11:09.794 "nvme_io_md": false, 00:11:09.794 "write_zeroes": true, 00:11:09.794 "zcopy": false, 00:11:09.794 "get_zone_info": false, 00:11:09.794 "zone_management": false, 00:11:09.794 "zone_append": false, 00:11:09.794 "compare": false, 00:11:09.794 "compare_and_write": false, 00:11:09.794 "abort": false, 00:11:09.794 "seek_hole": false, 00:11:09.794 "seek_data": false, 00:11:09.794 "copy": false, 00:11:09.794 "nvme_iov_md": false 00:11:09.794 }, 00:11:09.794 "memory_domains": [ 00:11:09.794 { 00:11:09.794 "dma_device_id": "system", 00:11:09.794 "dma_device_type": 1 00:11:09.794 }, 00:11:09.794 { 00:11:09.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.794 "dma_device_type": 2 00:11:09.794 }, 00:11:09.794 { 00:11:09.794 "dma_device_id": "system", 00:11:09.794 "dma_device_type": 1 00:11:09.794 }, 00:11:09.794 { 00:11:09.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.794 "dma_device_type": 2 00:11:09.794 }, 00:11:09.794 { 00:11:09.794 "dma_device_id": "system", 00:11:09.794 "dma_device_type": 1 00:11:09.794 }, 00:11:09.794 { 00:11:09.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.794 "dma_device_type": 2 00:11:09.794 }, 00:11:09.794 { 00:11:09.794 "dma_device_id": "system", 00:11:09.794 "dma_device_type": 1 00:11:09.794 }, 00:11:09.794 { 00:11:09.794 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:09.794 "dma_device_type": 2 00:11:09.794 } 00:11:09.794 ], 00:11:09.794 "driver_specific": { 00:11:09.794 "raid": { 00:11:09.794 "uuid": "099ed012-bce6-434d-a60d-46967e1317e0", 00:11:09.794 "strip_size_kb": 0, 00:11:09.794 "state": "online", 00:11:09.794 "raid_level": "raid1", 00:11:09.794 "superblock": true, 00:11:09.794 "num_base_bdevs": 4, 00:11:09.794 "num_base_bdevs_discovered": 4, 00:11:09.794 "num_base_bdevs_operational": 4, 00:11:09.794 "base_bdevs_list": [ 00:11:09.794 { 00:11:09.794 "name": "pt1", 00:11:09.794 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.794 "is_configured": true, 00:11:09.794 "data_offset": 2048, 00:11:09.794 "data_size": 63488 00:11:09.794 }, 00:11:09.794 { 00:11:09.794 "name": "pt2", 00:11:09.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.794 "is_configured": true, 00:11:09.794 "data_offset": 2048, 00:11:09.794 "data_size": 63488 00:11:09.794 }, 00:11:09.794 { 00:11:09.794 "name": "pt3", 00:11:09.794 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.794 "is_configured": true, 00:11:09.794 "data_offset": 2048, 00:11:09.794 "data_size": 63488 00:11:09.794 }, 00:11:09.794 { 00:11:09.794 "name": "pt4", 00:11:09.794 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:09.794 "is_configured": true, 00:11:09.794 "data_offset": 2048, 00:11:09.794 "data_size": 63488 00:11:09.794 } 00:11:09.794 ] 00:11:09.794 } 00:11:09.794 } 00:11:09.794 }' 00:11:09.794 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:10.052 pt2 00:11:10.052 pt3 00:11:10.052 pt4' 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.052 16:37:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.052 [2024-12-07 16:37:08.920128] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.052 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.310 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=099ed012-bce6-434d-a60d-46967e1317e0 00:11:10.310 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 099ed012-bce6-434d-a60d-46967e1317e0 ']' 00:11:10.310 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.310 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.310 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.310 [2024-12-07 16:37:08.967700] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.310 [2024-12-07 16:37:08.967749] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.310 [2024-12-07 16:37:08.967860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.310 [2024-12-07 16:37:08.967976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.310 [2024-12-07 16:37:08.967988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:10.310 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.310 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:10.310 16:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.310 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:10.310 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.310 16:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:10.310 16:37:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.310 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.310 [2024-12-07 16:37:09.131545] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:10.310 [2024-12-07 16:37:09.133806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:10.310 [2024-12-07 16:37:09.133866] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:10.310 [2024-12-07 16:37:09.133898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:10.310 [2024-12-07 16:37:09.133956] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:10.310 [2024-12-07 16:37:09.134024] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:10.310 [2024-12-07 16:37:09.134047] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:10.310 [2024-12-07 16:37:09.134064] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:10.310 [2024-12-07 16:37:09.134081] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.311 [2024-12-07 16:37:09.134091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name 
raid_bdev1, state configuring 00:11:10.311 request: 00:11:10.311 { 00:11:10.311 "name": "raid_bdev1", 00:11:10.311 "raid_level": "raid1", 00:11:10.311 "base_bdevs": [ 00:11:10.311 "malloc1", 00:11:10.311 "malloc2", 00:11:10.311 "malloc3", 00:11:10.311 "malloc4" 00:11:10.311 ], 00:11:10.311 "superblock": false, 00:11:10.311 "method": "bdev_raid_create", 00:11:10.311 "req_id": 1 00:11:10.311 } 00:11:10.311 Got JSON-RPC error response 00:11:10.311 response: 00:11:10.311 { 00:11:10.311 "code": -17, 00:11:10.311 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:10.311 } 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:10.311 
16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.311 [2024-12-07 16:37:09.187280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:10.311 [2024-12-07 16:37:09.187333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.311 [2024-12-07 16:37:09.187369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:10.311 [2024-12-07 16:37:09.187378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.311 [2024-12-07 16:37:09.189854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.311 [2024-12-07 16:37:09.189887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:10.311 [2024-12-07 16:37:09.189974] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:10.311 [2024-12-07 16:37:09.190020] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:10.311 pt1 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.311 16:37:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.311 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.568 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.568 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.568 "name": "raid_bdev1", 00:11:10.568 "uuid": "099ed012-bce6-434d-a60d-46967e1317e0", 00:11:10.568 "strip_size_kb": 0, 00:11:10.568 "state": "configuring", 00:11:10.568 "raid_level": "raid1", 00:11:10.568 "superblock": true, 00:11:10.568 "num_base_bdevs": 4, 00:11:10.568 "num_base_bdevs_discovered": 1, 00:11:10.568 "num_base_bdevs_operational": 4, 00:11:10.568 "base_bdevs_list": [ 00:11:10.568 { 00:11:10.568 "name": "pt1", 00:11:10.568 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.568 "is_configured": true, 00:11:10.568 "data_offset": 2048, 00:11:10.568 "data_size": 63488 00:11:10.568 }, 00:11:10.568 { 00:11:10.568 "name": null, 00:11:10.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.568 "is_configured": false, 00:11:10.568 "data_offset": 2048, 00:11:10.568 "data_size": 63488 00:11:10.568 }, 00:11:10.568 { 00:11:10.568 "name": null, 00:11:10.568 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.568 
"is_configured": false, 00:11:10.568 "data_offset": 2048, 00:11:10.568 "data_size": 63488 00:11:10.568 }, 00:11:10.568 { 00:11:10.568 "name": null, 00:11:10.568 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:10.568 "is_configured": false, 00:11:10.568 "data_offset": 2048, 00:11:10.568 "data_size": 63488 00:11:10.568 } 00:11:10.568 ] 00:11:10.568 }' 00:11:10.568 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.568 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.824 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:10.824 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:10.824 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.824 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.824 [2024-12-07 16:37:09.626677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:10.824 [2024-12-07 16:37:09.626762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.825 [2024-12-07 16:37:09.626791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:10.825 [2024-12-07 16:37:09.626802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.825 [2024-12-07 16:37:09.627384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.825 [2024-12-07 16:37:09.627411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:10.825 [2024-12-07 16:37:09.627514] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:10.825 [2024-12-07 16:37:09.627553] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:10.825 pt2 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.825 [2024-12-07 16:37:09.638654] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.825 16:37:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.825 "name": "raid_bdev1", 00:11:10.825 "uuid": "099ed012-bce6-434d-a60d-46967e1317e0", 00:11:10.825 "strip_size_kb": 0, 00:11:10.825 "state": "configuring", 00:11:10.825 "raid_level": "raid1", 00:11:10.825 "superblock": true, 00:11:10.825 "num_base_bdevs": 4, 00:11:10.825 "num_base_bdevs_discovered": 1, 00:11:10.825 "num_base_bdevs_operational": 4, 00:11:10.825 "base_bdevs_list": [ 00:11:10.825 { 00:11:10.825 "name": "pt1", 00:11:10.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.825 "is_configured": true, 00:11:10.825 "data_offset": 2048, 00:11:10.825 "data_size": 63488 00:11:10.825 }, 00:11:10.825 { 00:11:10.825 "name": null, 00:11:10.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.825 "is_configured": false, 00:11:10.825 "data_offset": 0, 00:11:10.825 "data_size": 63488 00:11:10.825 }, 00:11:10.825 { 00:11:10.825 "name": null, 00:11:10.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.825 "is_configured": false, 00:11:10.825 "data_offset": 2048, 00:11:10.825 "data_size": 63488 00:11:10.825 }, 00:11:10.825 { 00:11:10.825 "name": null, 00:11:10.825 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:10.825 "is_configured": false, 00:11:10.825 "data_offset": 2048, 00:11:10.825 "data_size": 63488 00:11:10.825 } 00:11:10.825 ] 00:11:10.825 }' 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.825 16:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.390 [2024-12-07 16:37:10.037997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:11.390 [2024-12-07 16:37:10.038093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.390 [2024-12-07 16:37:10.038118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:11.390 [2024-12-07 16:37:10.038132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.390 [2024-12-07 16:37:10.038670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.390 [2024-12-07 16:37:10.038700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:11.390 [2024-12-07 16:37:10.038798] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:11.390 [2024-12-07 16:37:10.038833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:11.390 pt2 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:11.390 16:37:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.390 [2024-12-07 16:37:10.049893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:11.390 [2024-12-07 16:37:10.049966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.390 [2024-12-07 16:37:10.049988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:11.390 [2024-12-07 16:37:10.050000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.390 [2024-12-07 16:37:10.050433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.390 [2024-12-07 16:37:10.050459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:11.390 [2024-12-07 16:37:10.050530] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:11.390 [2024-12-07 16:37:10.050554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:11.390 pt3 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.390 [2024-12-07 16:37:10.061866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:11.390 [2024-12-07 
16:37:10.061921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.390 [2024-12-07 16:37:10.061937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:11.390 [2024-12-07 16:37:10.061948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.390 [2024-12-07 16:37:10.062313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.390 [2024-12-07 16:37:10.062350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:11.390 [2024-12-07 16:37:10.062412] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:11.390 [2024-12-07 16:37:10.062433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:11.390 [2024-12-07 16:37:10.062560] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:11.390 [2024-12-07 16:37:10.062580] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:11.390 [2024-12-07 16:37:10.062850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:11.390 [2024-12-07 16:37:10.063002] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:11.390 [2024-12-07 16:37:10.063019] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:11:11.390 [2024-12-07 16:37:10.063137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.390 pt4 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.390 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.390 "name": "raid_bdev1", 00:11:11.390 "uuid": "099ed012-bce6-434d-a60d-46967e1317e0", 00:11:11.390 "strip_size_kb": 0, 00:11:11.390 "state": "online", 00:11:11.390 "raid_level": "raid1", 00:11:11.390 "superblock": true, 00:11:11.390 "num_base_bdevs": 4, 00:11:11.390 
"num_base_bdevs_discovered": 4, 00:11:11.390 "num_base_bdevs_operational": 4, 00:11:11.390 "base_bdevs_list": [ 00:11:11.390 { 00:11:11.390 "name": "pt1", 00:11:11.390 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.390 "is_configured": true, 00:11:11.390 "data_offset": 2048, 00:11:11.390 "data_size": 63488 00:11:11.390 }, 00:11:11.390 { 00:11:11.390 "name": "pt2", 00:11:11.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.390 "is_configured": true, 00:11:11.390 "data_offset": 2048, 00:11:11.390 "data_size": 63488 00:11:11.390 }, 00:11:11.390 { 00:11:11.390 "name": "pt3", 00:11:11.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.390 "is_configured": true, 00:11:11.390 "data_offset": 2048, 00:11:11.391 "data_size": 63488 00:11:11.391 }, 00:11:11.391 { 00:11:11.391 "name": "pt4", 00:11:11.391 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:11.391 "is_configured": true, 00:11:11.391 "data_offset": 2048, 00:11:11.391 "data_size": 63488 00:11:11.391 } 00:11:11.391 ] 00:11:11.391 }' 00:11:11.391 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.391 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.648 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:11.648 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:11.648 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.648 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.648 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.648 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.648 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:11.648 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.648 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.648 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.648 [2024-12-07 16:37:10.493695] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.648 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.648 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.648 "name": "raid_bdev1", 00:11:11.648 "aliases": [ 00:11:11.648 "099ed012-bce6-434d-a60d-46967e1317e0" 00:11:11.648 ], 00:11:11.648 "product_name": "Raid Volume", 00:11:11.648 "block_size": 512, 00:11:11.648 "num_blocks": 63488, 00:11:11.648 "uuid": "099ed012-bce6-434d-a60d-46967e1317e0", 00:11:11.648 "assigned_rate_limits": { 00:11:11.648 "rw_ios_per_sec": 0, 00:11:11.648 "rw_mbytes_per_sec": 0, 00:11:11.648 "r_mbytes_per_sec": 0, 00:11:11.648 "w_mbytes_per_sec": 0 00:11:11.648 }, 00:11:11.648 "claimed": false, 00:11:11.648 "zoned": false, 00:11:11.648 "supported_io_types": { 00:11:11.648 "read": true, 00:11:11.648 "write": true, 00:11:11.648 "unmap": false, 00:11:11.648 "flush": false, 00:11:11.648 "reset": true, 00:11:11.648 "nvme_admin": false, 00:11:11.648 "nvme_io": false, 00:11:11.648 "nvme_io_md": false, 00:11:11.648 "write_zeroes": true, 00:11:11.648 "zcopy": false, 00:11:11.648 "get_zone_info": false, 00:11:11.648 "zone_management": false, 00:11:11.648 "zone_append": false, 00:11:11.648 "compare": false, 00:11:11.648 "compare_and_write": false, 00:11:11.648 "abort": false, 00:11:11.648 "seek_hole": false, 00:11:11.648 "seek_data": false, 00:11:11.648 "copy": false, 00:11:11.648 "nvme_iov_md": false 00:11:11.648 }, 00:11:11.648 "memory_domains": [ 00:11:11.648 { 00:11:11.648 "dma_device_id": "system", 00:11:11.648 
"dma_device_type": 1 00:11:11.648 }, 00:11:11.648 { 00:11:11.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.648 "dma_device_type": 2 00:11:11.648 }, 00:11:11.648 { 00:11:11.648 "dma_device_id": "system", 00:11:11.648 "dma_device_type": 1 00:11:11.648 }, 00:11:11.648 { 00:11:11.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.648 "dma_device_type": 2 00:11:11.648 }, 00:11:11.648 { 00:11:11.648 "dma_device_id": "system", 00:11:11.648 "dma_device_type": 1 00:11:11.648 }, 00:11:11.648 { 00:11:11.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.648 "dma_device_type": 2 00:11:11.648 }, 00:11:11.648 { 00:11:11.648 "dma_device_id": "system", 00:11:11.648 "dma_device_type": 1 00:11:11.648 }, 00:11:11.648 { 00:11:11.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.648 "dma_device_type": 2 00:11:11.648 } 00:11:11.648 ], 00:11:11.648 "driver_specific": { 00:11:11.648 "raid": { 00:11:11.648 "uuid": "099ed012-bce6-434d-a60d-46967e1317e0", 00:11:11.648 "strip_size_kb": 0, 00:11:11.648 "state": "online", 00:11:11.648 "raid_level": "raid1", 00:11:11.648 "superblock": true, 00:11:11.648 "num_base_bdevs": 4, 00:11:11.648 "num_base_bdevs_discovered": 4, 00:11:11.648 "num_base_bdevs_operational": 4, 00:11:11.648 "base_bdevs_list": [ 00:11:11.648 { 00:11:11.648 "name": "pt1", 00:11:11.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.648 "is_configured": true, 00:11:11.648 "data_offset": 2048, 00:11:11.648 "data_size": 63488 00:11:11.648 }, 00:11:11.648 { 00:11:11.648 "name": "pt2", 00:11:11.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.648 "is_configured": true, 00:11:11.648 "data_offset": 2048, 00:11:11.648 "data_size": 63488 00:11:11.648 }, 00:11:11.648 { 00:11:11.648 "name": "pt3", 00:11:11.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.648 "is_configured": true, 00:11:11.648 "data_offset": 2048, 00:11:11.648 "data_size": 63488 00:11:11.648 }, 00:11:11.648 { 00:11:11.648 "name": "pt4", 00:11:11.648 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:11.648 "is_configured": true, 00:11:11.648 "data_offset": 2048, 00:11:11.648 "data_size": 63488 00:11:11.648 } 00:11:11.648 ] 00:11:11.648 } 00:11:11.648 } 00:11:11.648 }' 00:11:11.649 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.906 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:11.906 pt2 00:11:11.906 pt3 00:11:11.906 pt4' 00:11:11.906 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.906 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:11.906 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.906 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.906 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:11.906 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.906 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.906 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.906 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.906 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.906 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.906 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:11.906 16:37:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:11.907 [2024-12-07 16:37:10.773103] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 099ed012-bce6-434d-a60d-46967e1317e0 '!=' 099ed012-bce6-434d-a60d-46967e1317e0 ']' 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.907 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.165 [2024-12-07 16:37:10.808774] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:12.165 16:37:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.165 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.165 "name": "raid_bdev1", 00:11:12.165 "uuid": "099ed012-bce6-434d-a60d-46967e1317e0", 00:11:12.165 "strip_size_kb": 0, 00:11:12.165 "state": "online", 
00:11:12.165 "raid_level": "raid1", 00:11:12.165 "superblock": true, 00:11:12.165 "num_base_bdevs": 4, 00:11:12.165 "num_base_bdevs_discovered": 3, 00:11:12.165 "num_base_bdevs_operational": 3, 00:11:12.165 "base_bdevs_list": [ 00:11:12.165 { 00:11:12.165 "name": null, 00:11:12.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.165 "is_configured": false, 00:11:12.165 "data_offset": 0, 00:11:12.165 "data_size": 63488 00:11:12.165 }, 00:11:12.165 { 00:11:12.165 "name": "pt2", 00:11:12.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.165 "is_configured": true, 00:11:12.165 "data_offset": 2048, 00:11:12.165 "data_size": 63488 00:11:12.165 }, 00:11:12.165 { 00:11:12.165 "name": "pt3", 00:11:12.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:12.165 "is_configured": true, 00:11:12.165 "data_offset": 2048, 00:11:12.165 "data_size": 63488 00:11:12.165 }, 00:11:12.165 { 00:11:12.166 "name": "pt4", 00:11:12.166 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:12.166 "is_configured": true, 00:11:12.166 "data_offset": 2048, 00:11:12.166 "data_size": 63488 00:11:12.166 } 00:11:12.166 ] 00:11:12.166 }' 00:11:12.166 16:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.166 16:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.425 [2024-12-07 16:37:11.232000] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.425 [2024-12-07 16:37:11.232040] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.425 [2024-12-07 16:37:11.232152] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:12.425 [2024-12-07 16:37:11.232246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.425 [2024-12-07 16:37:11.232266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:12.425 
16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.425 [2024-12-07 16:37:11.315825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:12.425 [2024-12-07 16:37:11.315907] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.425 [2024-12-07 16:37:11.315930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:12.425 [2024-12-07 16:37:11.315943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.425 [2024-12-07 16:37:11.318607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.425 [2024-12-07 16:37:11.318646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:12.425 [2024-12-07 16:37:11.318731] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:12.425 [2024-12-07 16:37:11.318773] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:12.425 pt2 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.425 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.684 "name": "raid_bdev1", 00:11:12.684 "uuid": "099ed012-bce6-434d-a60d-46967e1317e0", 00:11:12.684 "strip_size_kb": 0, 00:11:12.684 "state": "configuring", 00:11:12.684 "raid_level": "raid1", 00:11:12.684 "superblock": true, 00:11:12.684 "num_base_bdevs": 4, 00:11:12.684 "num_base_bdevs_discovered": 1, 00:11:12.684 "num_base_bdevs_operational": 3, 00:11:12.684 "base_bdevs_list": [ 00:11:12.684 { 00:11:12.684 "name": null, 00:11:12.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.684 "is_configured": false, 00:11:12.684 "data_offset": 2048, 00:11:12.684 "data_size": 63488 00:11:12.684 }, 00:11:12.684 { 00:11:12.684 "name": "pt2", 00:11:12.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.684 "is_configured": true, 00:11:12.684 "data_offset": 2048, 00:11:12.684 "data_size": 63488 00:11:12.684 }, 00:11:12.684 { 00:11:12.684 "name": null, 00:11:12.684 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:12.684 "is_configured": false, 00:11:12.684 "data_offset": 2048, 00:11:12.684 "data_size": 63488 00:11:12.684 }, 00:11:12.684 { 00:11:12.684 "name": null, 00:11:12.684 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:12.684 "is_configured": false, 00:11:12.684 "data_offset": 2048, 00:11:12.684 "data_size": 63488 00:11:12.684 } 00:11:12.684 ] 00:11:12.684 }' 
00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.684 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.943 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:12.943 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:12.943 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:12.943 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.943 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.944 [2024-12-07 16:37:11.751185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:12.944 [2024-12-07 16:37:11.751277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.944 [2024-12-07 16:37:11.751303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:12.944 [2024-12-07 16:37:11.751319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.944 [2024-12-07 16:37:11.751862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.944 [2024-12-07 16:37:11.751896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:12.944 [2024-12-07 16:37:11.751992] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:12.944 [2024-12-07 16:37:11.752024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:12.944 pt3 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.944 "name": "raid_bdev1", 00:11:12.944 "uuid": "099ed012-bce6-434d-a60d-46967e1317e0", 00:11:12.944 "strip_size_kb": 0, 00:11:12.944 "state": "configuring", 00:11:12.944 "raid_level": "raid1", 00:11:12.944 "superblock": true, 00:11:12.944 "num_base_bdevs": 4, 00:11:12.944 "num_base_bdevs_discovered": 2, 00:11:12.944 "num_base_bdevs_operational": 3, 00:11:12.944 
"base_bdevs_list": [ 00:11:12.944 { 00:11:12.944 "name": null, 00:11:12.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.944 "is_configured": false, 00:11:12.944 "data_offset": 2048, 00:11:12.944 "data_size": 63488 00:11:12.944 }, 00:11:12.944 { 00:11:12.944 "name": "pt2", 00:11:12.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.944 "is_configured": true, 00:11:12.944 "data_offset": 2048, 00:11:12.944 "data_size": 63488 00:11:12.944 }, 00:11:12.944 { 00:11:12.944 "name": "pt3", 00:11:12.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:12.944 "is_configured": true, 00:11:12.944 "data_offset": 2048, 00:11:12.944 "data_size": 63488 00:11:12.944 }, 00:11:12.944 { 00:11:12.944 "name": null, 00:11:12.944 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:12.944 "is_configured": false, 00:11:12.944 "data_offset": 2048, 00:11:12.944 "data_size": 63488 00:11:12.944 } 00:11:12.944 ] 00:11:12.944 }' 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.944 16:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.511 [2024-12-07 16:37:12.178520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:13.511 [2024-12-07 16:37:12.178614] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.511 [2024-12-07 16:37:12.178642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:13.511 [2024-12-07 16:37:12.178654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.511 [2024-12-07 16:37:12.179176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.511 [2024-12-07 16:37:12.179209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:13.511 [2024-12-07 16:37:12.179306] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:13.511 [2024-12-07 16:37:12.179360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:13.511 [2024-12-07 16:37:12.179492] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:13.511 [2024-12-07 16:37:12.179511] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:13.511 [2024-12-07 16:37:12.179812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:13.511 [2024-12-07 16:37:12.179962] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:13.511 [2024-12-07 16:37:12.179978] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:11:13.511 [2024-12-07 16:37:12.180103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.511 pt4 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.511 "name": "raid_bdev1", 00:11:13.511 "uuid": "099ed012-bce6-434d-a60d-46967e1317e0", 00:11:13.511 "strip_size_kb": 0, 00:11:13.511 "state": "online", 00:11:13.511 "raid_level": "raid1", 00:11:13.511 "superblock": true, 00:11:13.511 "num_base_bdevs": 4, 00:11:13.511 "num_base_bdevs_discovered": 3, 00:11:13.511 "num_base_bdevs_operational": 3, 00:11:13.511 "base_bdevs_list": [ 00:11:13.511 { 00:11:13.511 "name": null, 00:11:13.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.511 "is_configured": false, 00:11:13.511 
"data_offset": 2048, 00:11:13.511 "data_size": 63488 00:11:13.511 }, 00:11:13.511 { 00:11:13.511 "name": "pt2", 00:11:13.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:13.511 "is_configured": true, 00:11:13.511 "data_offset": 2048, 00:11:13.511 "data_size": 63488 00:11:13.511 }, 00:11:13.511 { 00:11:13.511 "name": "pt3", 00:11:13.511 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:13.511 "is_configured": true, 00:11:13.511 "data_offset": 2048, 00:11:13.511 "data_size": 63488 00:11:13.511 }, 00:11:13.511 { 00:11:13.511 "name": "pt4", 00:11:13.511 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:13.511 "is_configured": true, 00:11:13.511 "data_offset": 2048, 00:11:13.511 "data_size": 63488 00:11:13.511 } 00:11:13.511 ] 00:11:13.511 }' 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.511 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.771 [2024-12-07 16:37:12.549896] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:13.771 [2024-12-07 16:37:12.549936] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.771 [2024-12-07 16:37:12.550045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.771 [2024-12-07 16:37:12.550134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.771 [2024-12-07 16:37:12.550146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:11:13.771 16:37:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.771 [2024-12-07 16:37:12.613780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:13.771 [2024-12-07 16:37:12.613853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:13.771 [2024-12-07 16:37:12.613880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:13.771 [2024-12-07 16:37:12.613890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.771 [2024-12-07 16:37:12.616607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.771 [2024-12-07 16:37:12.616644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:13.771 [2024-12-07 16:37:12.616741] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:13.771 [2024-12-07 16:37:12.616790] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:13.771 [2024-12-07 16:37:12.616922] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:13.771 [2024-12-07 16:37:12.616943] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:13.771 [2024-12-07 16:37:12.616965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:11:13.771 [2024-12-07 16:37:12.617007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:13.771 [2024-12-07 16:37:12.617114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:13.771 pt1 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.771 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.030 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.030 "name": "raid_bdev1", 00:11:14.030 "uuid": "099ed012-bce6-434d-a60d-46967e1317e0", 00:11:14.030 "strip_size_kb": 0, 00:11:14.030 "state": "configuring", 00:11:14.030 "raid_level": "raid1", 00:11:14.030 "superblock": true, 00:11:14.030 "num_base_bdevs": 4, 00:11:14.030 "num_base_bdevs_discovered": 2, 00:11:14.030 "num_base_bdevs_operational": 3, 00:11:14.030 "base_bdevs_list": [ 00:11:14.030 { 00:11:14.030 "name": null, 00:11:14.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.030 "is_configured": false, 00:11:14.030 "data_offset": 2048, 00:11:14.030 
"data_size": 63488 00:11:14.030 }, 00:11:14.030 { 00:11:14.030 "name": "pt2", 00:11:14.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.030 "is_configured": true, 00:11:14.030 "data_offset": 2048, 00:11:14.030 "data_size": 63488 00:11:14.030 }, 00:11:14.030 { 00:11:14.030 "name": "pt3", 00:11:14.030 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:14.030 "is_configured": true, 00:11:14.030 "data_offset": 2048, 00:11:14.030 "data_size": 63488 00:11:14.030 }, 00:11:14.030 { 00:11:14.030 "name": null, 00:11:14.030 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:14.030 "is_configured": false, 00:11:14.030 "data_offset": 2048, 00:11:14.030 "data_size": 63488 00:11:14.030 } 00:11:14.030 ] 00:11:14.030 }' 00:11:14.030 16:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.030 16:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.289 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:14.289 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:14.289 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.289 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.289 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.289 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:14.289 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:14.289 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.289 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.289 [2024-12-07 
16:37:13.100952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:14.289 [2024-12-07 16:37:13.101037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.289 [2024-12-07 16:37:13.101062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:14.289 [2024-12-07 16:37:13.101075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.289 [2024-12-07 16:37:13.101618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.289 [2024-12-07 16:37:13.101648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:14.289 [2024-12-07 16:37:13.101743] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:14.289 [2024-12-07 16:37:13.101775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:14.289 [2024-12-07 16:37:13.101893] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:14.289 [2024-12-07 16:37:13.101912] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:14.289 [2024-12-07 16:37:13.102187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:14.289 [2024-12-07 16:37:13.102329] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:14.290 [2024-12-07 16:37:13.102353] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:14.290 [2024-12-07 16:37:13.102497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.290 pt4 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:14.290 16:37:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.290 "name": "raid_bdev1", 00:11:14.290 "uuid": "099ed012-bce6-434d-a60d-46967e1317e0", 00:11:14.290 "strip_size_kb": 0, 00:11:14.290 "state": "online", 00:11:14.290 "raid_level": "raid1", 00:11:14.290 "superblock": true, 00:11:14.290 "num_base_bdevs": 4, 00:11:14.290 "num_base_bdevs_discovered": 3, 00:11:14.290 "num_base_bdevs_operational": 3, 00:11:14.290 "base_bdevs_list": [ 00:11:14.290 { 
00:11:14.290 "name": null, 00:11:14.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.290 "is_configured": false, 00:11:14.290 "data_offset": 2048, 00:11:14.290 "data_size": 63488 00:11:14.290 }, 00:11:14.290 { 00:11:14.290 "name": "pt2", 00:11:14.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.290 "is_configured": true, 00:11:14.290 "data_offset": 2048, 00:11:14.290 "data_size": 63488 00:11:14.290 }, 00:11:14.290 { 00:11:14.290 "name": "pt3", 00:11:14.290 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:14.290 "is_configured": true, 00:11:14.290 "data_offset": 2048, 00:11:14.290 "data_size": 63488 00:11:14.290 }, 00:11:14.290 { 00:11:14.290 "name": "pt4", 00:11:14.290 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:14.290 "is_configured": true, 00:11:14.290 "data_offset": 2048, 00:11:14.290 "data_size": 63488 00:11:14.290 } 00:11:14.290 ] 00:11:14.290 }' 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.290 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.859 
16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:14.859 [2024-12-07 16:37:13.608453] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 099ed012-bce6-434d-a60d-46967e1317e0 '!=' 099ed012-bce6-434d-a60d-46967e1317e0 ']' 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85606 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85606 ']' 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85606 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85606 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85606' 00:11:14.859 killing process with pid 85606 00:11:14.859 16:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85606 00:11:14.859 [2024-12-07 16:37:13.693089] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.859 [2024-12-07 16:37:13.693221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.859 16:37:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85606 00:11:14.859 [2024-12-07 16:37:13.693319] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.859 [2024-12-07 16:37:13.693333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:15.118 [2024-12-07 16:37:13.777459] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.378 16:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:15.378 00:11:15.378 real 0m7.125s 00:11:15.378 user 0m11.656s 00:11:15.378 sys 0m1.613s 00:11:15.378 16:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.378 16:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.378 ************************************ 00:11:15.378 END TEST raid_superblock_test 00:11:15.378 ************************************ 00:11:15.378 16:37:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:15.378 16:37:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:15.378 16:37:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.378 16:37:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.378 ************************************ 00:11:15.378 START TEST raid_read_error_test 00:11:15.378 ************************************ 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:15.378 
16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:15.378 16:37:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1oMOrVNMip 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86082 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86082 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 86082 ']' 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.378 16:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.638 [2024-12-07 16:37:14.353623] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:15.638 [2024-12-07 16:37:14.353783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86082 ] 00:11:15.638 [2024-12-07 16:37:14.519586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.898 [2024-12-07 16:37:14.597133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.898 [2024-12-07 16:37:14.678319] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.898 [2024-12-07 16:37:14.678369] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.466 BaseBdev1_malloc 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.466 true 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.466 [2024-12-07 16:37:15.232419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:16.466 [2024-12-07 16:37:15.232497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.466 [2024-12-07 16:37:15.232521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:16.466 [2024-12-07 16:37:15.232530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.466 [2024-12-07 16:37:15.235058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.466 [2024-12-07 16:37:15.235095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:16.466 BaseBdev1 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.466 BaseBdev2_malloc 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.466 true 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.466 [2024-12-07 16:37:15.293184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:16.466 [2024-12-07 16:37:15.293245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.466 [2024-12-07 16:37:15.293265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:16.466 [2024-12-07 16:37:15.293275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.466 [2024-12-07 16:37:15.295721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.466 [2024-12-07 16:37:15.295757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:16.466 BaseBdev2 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.466 BaseBdev3_malloc 00:11:16.466 16:37:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.466 true 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.466 [2024-12-07 16:37:15.340898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:16.466 [2024-12-07 16:37:15.340955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.466 [2024-12-07 16:37:15.340976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:16.466 [2024-12-07 16:37:15.340986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.466 [2024-12-07 16:37:15.343420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.466 [2024-12-07 16:37:15.343454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:16.466 BaseBdev3 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.466 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:16.467 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:16.467 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.467 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.724 BaseBdev4_malloc 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.724 true 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.724 [2024-12-07 16:37:15.388684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:16.724 [2024-12-07 16:37:15.388740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.724 [2024-12-07 16:37:15.388764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:16.724 [2024-12-07 16:37:15.388774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.724 [2024-12-07 16:37:15.391204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.724 [2024-12-07 16:37:15.391238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:16.724 BaseBdev4 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.724 [2024-12-07 16:37:15.400718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.724 [2024-12-07 16:37:15.402867] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.724 [2024-12-07 16:37:15.402968] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.724 [2024-12-07 16:37:15.403042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:16.724 [2024-12-07 16:37:15.403252] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:16.724 [2024-12-07 16:37:15.403270] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:16.724 [2024-12-07 16:37:15.403562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:16.724 [2024-12-07 16:37:15.403715] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:16.724 [2024-12-07 16:37:15.403744] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:16.724 [2024-12-07 16:37:15.403886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:16.724 16:37:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.724 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.724 "name": "raid_bdev1", 00:11:16.724 "uuid": "3e639db9-b02f-4766-a556-4a25291e376e", 00:11:16.724 "strip_size_kb": 0, 00:11:16.724 "state": "online", 00:11:16.724 "raid_level": "raid1", 00:11:16.724 "superblock": true, 00:11:16.724 "num_base_bdevs": 4, 00:11:16.724 "num_base_bdevs_discovered": 4, 00:11:16.725 "num_base_bdevs_operational": 4, 00:11:16.725 "base_bdevs_list": [ 00:11:16.725 { 
00:11:16.725 "name": "BaseBdev1", 00:11:16.725 "uuid": "62a9e105-d127-5ea0-8492-1037f91525a3", 00:11:16.725 "is_configured": true, 00:11:16.725 "data_offset": 2048, 00:11:16.725 "data_size": 63488 00:11:16.725 }, 00:11:16.725 { 00:11:16.725 "name": "BaseBdev2", 00:11:16.725 "uuid": "7e09f6a7-07dd-54b8-b648-cc9f68dc42c7", 00:11:16.725 "is_configured": true, 00:11:16.725 "data_offset": 2048, 00:11:16.725 "data_size": 63488 00:11:16.725 }, 00:11:16.725 { 00:11:16.725 "name": "BaseBdev3", 00:11:16.725 "uuid": "75386dcf-d420-5f4c-8c60-fd593bded1ee", 00:11:16.725 "is_configured": true, 00:11:16.725 "data_offset": 2048, 00:11:16.725 "data_size": 63488 00:11:16.725 }, 00:11:16.725 { 00:11:16.725 "name": "BaseBdev4", 00:11:16.725 "uuid": "6511afd6-a4a5-5c40-969b-a667c3873510", 00:11:16.725 "is_configured": true, 00:11:16.725 "data_offset": 2048, 00:11:16.725 "data_size": 63488 00:11:16.725 } 00:11:16.725 ] 00:11:16.725 }' 00:11:16.725 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.725 16:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.982 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:16.982 16:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:17.241 [2024-12-07 16:37:15.952294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.178 16:37:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.178 16:37:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.178 "name": "raid_bdev1", 00:11:18.178 "uuid": "3e639db9-b02f-4766-a556-4a25291e376e", 00:11:18.178 "strip_size_kb": 0, 00:11:18.178 "state": "online", 00:11:18.178 "raid_level": "raid1", 00:11:18.178 "superblock": true, 00:11:18.178 "num_base_bdevs": 4, 00:11:18.178 "num_base_bdevs_discovered": 4, 00:11:18.178 "num_base_bdevs_operational": 4, 00:11:18.178 "base_bdevs_list": [ 00:11:18.178 { 00:11:18.178 "name": "BaseBdev1", 00:11:18.178 "uuid": "62a9e105-d127-5ea0-8492-1037f91525a3", 00:11:18.178 "is_configured": true, 00:11:18.178 "data_offset": 2048, 00:11:18.178 "data_size": 63488 00:11:18.178 }, 00:11:18.178 { 00:11:18.178 "name": "BaseBdev2", 00:11:18.178 "uuid": "7e09f6a7-07dd-54b8-b648-cc9f68dc42c7", 00:11:18.178 "is_configured": true, 00:11:18.178 "data_offset": 2048, 00:11:18.178 "data_size": 63488 00:11:18.178 }, 00:11:18.178 { 00:11:18.178 "name": "BaseBdev3", 00:11:18.178 "uuid": "75386dcf-d420-5f4c-8c60-fd593bded1ee", 00:11:18.178 "is_configured": true, 00:11:18.178 "data_offset": 2048, 00:11:18.178 "data_size": 63488 00:11:18.178 }, 00:11:18.178 { 00:11:18.178 "name": "BaseBdev4", 00:11:18.178 "uuid": "6511afd6-a4a5-5c40-969b-a667c3873510", 00:11:18.178 "is_configured": true, 00:11:18.178 "data_offset": 2048, 00:11:18.178 "data_size": 63488 00:11:18.178 } 00:11:18.178 ] 00:11:18.178 }' 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.178 16:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.438 16:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:18.438 16:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.438 16:37:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:18.438 [2024-12-07 16:37:17.325900] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.438 [2024-12-07 16:37:17.325944] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.438 [2024-12-07 16:37:17.328748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.438 [2024-12-07 16:37:17.328897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.438 [2024-12-07 16:37:17.329047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.438 [2024-12-07 16:37:17.329059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:18.438 { 00:11:18.438 "results": [ 00:11:18.438 { 00:11:18.438 "job": "raid_bdev1", 00:11:18.438 "core_mask": "0x1", 00:11:18.438 "workload": "randrw", 00:11:18.438 "percentage": 50, 00:11:18.438 "status": "finished", 00:11:18.438 "queue_depth": 1, 00:11:18.438 "io_size": 131072, 00:11:18.438 "runtime": 1.374188, 00:11:18.438 "iops": 8118.248740347027, 00:11:18.438 "mibps": 1014.7810925433783, 00:11:18.438 "io_failed": 0, 00:11:18.438 "io_timeout": 0, 00:11:18.438 "avg_latency_us": 120.50069941019069, 00:11:18.438 "min_latency_us": 24.258515283842794, 00:11:18.438 "max_latency_us": 1538.235807860262 00:11:18.438 } 00:11:18.438 ], 00:11:18.438 "core_count": 1 00:11:18.438 } 00:11:18.438 16:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.438 16:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86082 00:11:18.438 16:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 86082 ']' 00:11:18.438 16:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 86082 00:11:18.697 16:37:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:18.697 16:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.697 16:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86082 00:11:18.697 16:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:18.697 16:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:18.697 killing process with pid 86082 00:11:18.697 16:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86082' 00:11:18.697 16:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 86082 00:11:18.697 [2024-12-07 16:37:17.367059] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.697 16:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 86082 00:11:18.697 [2024-12-07 16:37:17.438085] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:18.956 16:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1oMOrVNMip 00:11:18.956 16:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:18.956 16:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:18.956 16:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:18.956 16:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:18.956 16:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:18.956 16:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:18.956 16:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:18.956 00:11:18.956 real 0m3.597s 00:11:18.956 user 0m4.320s 00:11:18.956 sys 0m0.737s 
00:11:18.956 16:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.956 16:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.956 ************************************ 00:11:18.956 END TEST raid_read_error_test 00:11:18.956 ************************************ 00:11:19.216 16:37:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:19.216 16:37:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:19.216 16:37:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.216 16:37:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.216 ************************************ 00:11:19.216 START TEST raid_write_error_test 00:11:19.216 ************************************ 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.07a6jgznfi 00:11:19.216 16:37:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86211 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86211 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 86211 ']' 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:19.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:19.216 16:37:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.216 [2024-12-07 16:37:18.016605] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:19.216 [2024-12-07 16:37:18.016737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86211 ] 00:11:19.476 [2024-12-07 16:37:18.183842] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.476 [2024-12-07 16:37:18.264450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.476 [2024-12-07 16:37:18.345888] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.476 [2024-12-07 16:37:18.345940] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.045 BaseBdev1_malloc 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.045 true 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.045 [2024-12-07 16:37:18.896068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:20.045 [2024-12-07 16:37:18.896131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.045 [2024-12-07 16:37:18.896160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:20.045 [2024-12-07 16:37:18.896177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.045 [2024-12-07 16:37:18.898646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.045 [2024-12-07 16:37:18.898680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:20.045 BaseBdev1 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.045 BaseBdev2_malloc 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:20.045 16:37:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.045 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.305 true 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.305 [2024-12-07 16:37:18.955094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:20.305 [2024-12-07 16:37:18.955157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.305 [2024-12-07 16:37:18.955179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:20.305 [2024-12-07 16:37:18.955188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.305 [2024-12-07 16:37:18.957758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.305 [2024-12-07 16:37:18.957793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:20.305 BaseBdev2 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:20.305 BaseBdev3_malloc 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.305 true 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.305 16:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:20.306 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.306 16:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.306 [2024-12-07 16:37:19.002874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:20.306 [2024-12-07 16:37:19.002932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.306 [2024-12-07 16:37:19.002958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:20.306 [2024-12-07 16:37:19.002968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.306 [2024-12-07 16:37:19.005551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.306 [2024-12-07 16:37:19.005586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:20.306 BaseBdev3 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.306 BaseBdev4_malloc 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.306 true 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.306 [2024-12-07 16:37:19.050626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:20.306 [2024-12-07 16:37:19.050683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.306 [2024-12-07 16:37:19.050710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:20.306 [2024-12-07 16:37:19.050720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.306 [2024-12-07 16:37:19.053321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.306 [2024-12-07 16:37:19.053366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:20.306 BaseBdev4 
00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.306 [2024-12-07 16:37:19.062669] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.306 [2024-12-07 16:37:19.064923] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.306 [2024-12-07 16:37:19.065019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.306 [2024-12-07 16:37:19.065075] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:20.306 [2024-12-07 16:37:19.065294] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:20.306 [2024-12-07 16:37:19.065312] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:20.306 [2024-12-07 16:37:19.065598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:20.306 [2024-12-07 16:37:19.065760] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:20.306 [2024-12-07 16:37:19.065781] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:20.306 [2024-12-07 16:37:19.065919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.306 "name": "raid_bdev1", 00:11:20.306 "uuid": "b8b0051a-76be-43c0-8ad2-3dd1683ba457", 00:11:20.306 "strip_size_kb": 0, 00:11:20.306 "state": "online", 00:11:20.306 "raid_level": "raid1", 00:11:20.306 "superblock": true, 00:11:20.306 "num_base_bdevs": 4, 00:11:20.306 "num_base_bdevs_discovered": 4, 00:11:20.306 
"num_base_bdevs_operational": 4, 00:11:20.306 "base_bdevs_list": [ 00:11:20.306 { 00:11:20.306 "name": "BaseBdev1", 00:11:20.306 "uuid": "3bfd7b92-2afc-5a56-9427-e9768253ead4", 00:11:20.306 "is_configured": true, 00:11:20.306 "data_offset": 2048, 00:11:20.306 "data_size": 63488 00:11:20.306 }, 00:11:20.306 { 00:11:20.306 "name": "BaseBdev2", 00:11:20.306 "uuid": "41a42e6f-b805-50d6-9138-7514b0379c29", 00:11:20.306 "is_configured": true, 00:11:20.306 "data_offset": 2048, 00:11:20.306 "data_size": 63488 00:11:20.306 }, 00:11:20.306 { 00:11:20.306 "name": "BaseBdev3", 00:11:20.306 "uuid": "846c6bfc-1bc6-5bdb-8ff0-648b7de99407", 00:11:20.306 "is_configured": true, 00:11:20.306 "data_offset": 2048, 00:11:20.306 "data_size": 63488 00:11:20.306 }, 00:11:20.306 { 00:11:20.306 "name": "BaseBdev4", 00:11:20.306 "uuid": "b7c81bd6-9855-5966-b907-9173cccf2724", 00:11:20.306 "is_configured": true, 00:11:20.306 "data_offset": 2048, 00:11:20.306 "data_size": 63488 00:11:20.306 } 00:11:20.306 ] 00:11:20.306 }' 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.306 16:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.877 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:20.877 16:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:20.877 [2024-12-07 16:37:19.614189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:21.814 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:21.814 16:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.814 16:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.814 [2024-12-07 16:37:20.539462] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:21.814 [2024-12-07 16:37:20.539529] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:21.814 [2024-12-07 16:37:20.539777] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:21.814 16:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.814 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:21.814 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:21.814 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:21.814 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:21.814 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:21.814 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.814 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.814 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.814 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.815 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.815 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.815 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.815 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.815 16:37:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.815 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.815 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.815 16:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.815 16:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.815 16:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.815 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.815 "name": "raid_bdev1", 00:11:21.815 "uuid": "b8b0051a-76be-43c0-8ad2-3dd1683ba457", 00:11:21.815 "strip_size_kb": 0, 00:11:21.815 "state": "online", 00:11:21.815 "raid_level": "raid1", 00:11:21.815 "superblock": true, 00:11:21.815 "num_base_bdevs": 4, 00:11:21.815 "num_base_bdevs_discovered": 3, 00:11:21.815 "num_base_bdevs_operational": 3, 00:11:21.815 "base_bdevs_list": [ 00:11:21.815 { 00:11:21.815 "name": null, 00:11:21.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.815 "is_configured": false, 00:11:21.815 "data_offset": 0, 00:11:21.815 "data_size": 63488 00:11:21.815 }, 00:11:21.815 { 00:11:21.815 "name": "BaseBdev2", 00:11:21.815 "uuid": "41a42e6f-b805-50d6-9138-7514b0379c29", 00:11:21.815 "is_configured": true, 00:11:21.815 "data_offset": 2048, 00:11:21.815 "data_size": 63488 00:11:21.815 }, 00:11:21.815 { 00:11:21.815 "name": "BaseBdev3", 00:11:21.815 "uuid": "846c6bfc-1bc6-5bdb-8ff0-648b7de99407", 00:11:21.815 "is_configured": true, 00:11:21.815 "data_offset": 2048, 00:11:21.815 "data_size": 63488 00:11:21.815 }, 00:11:21.815 { 00:11:21.815 "name": "BaseBdev4", 00:11:21.815 "uuid": "b7c81bd6-9855-5966-b907-9173cccf2724", 00:11:21.815 "is_configured": true, 00:11:21.815 "data_offset": 2048, 00:11:21.815 "data_size": 63488 00:11:21.815 } 00:11:21.815 ] 
00:11:21.815 }' 00:11:21.815 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.815 16:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.392 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:22.392 16:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.392 16:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.392 [2024-12-07 16:37:20.989070] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:22.392 [2024-12-07 16:37:20.989119] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.392 [2024-12-07 16:37:20.991634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.392 [2024-12-07 16:37:20.991714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.392 [2024-12-07 16:37:20.991830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.392 [2024-12-07 16:37:20.991848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:22.392 { 00:11:22.392 "results": [ 00:11:22.392 { 00:11:22.392 "job": "raid_bdev1", 00:11:22.392 "core_mask": "0x1", 00:11:22.392 "workload": "randrw", 00:11:22.392 "percentage": 50, 00:11:22.392 "status": "finished", 00:11:22.392 "queue_depth": 1, 00:11:22.392 "io_size": 131072, 00:11:22.392 "runtime": 1.375308, 00:11:22.392 "iops": 9069.241217240065, 00:11:22.392 "mibps": 1133.6551521550082, 00:11:22.392 "io_failed": 0, 00:11:22.392 "io_timeout": 0, 00:11:22.392 "avg_latency_us": 107.71869704938211, 00:11:22.392 "min_latency_us": 22.46986899563319, 00:11:22.392 "max_latency_us": 1387.989519650655 00:11:22.392 } 00:11:22.392 ], 00:11:22.392 "core_count": 1 
00:11:22.392 } 00:11:22.392 16:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.392 16:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86211 00:11:22.392 16:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 86211 ']' 00:11:22.392 16:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 86211 00:11:22.392 16:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:22.392 16:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:22.392 16:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86211 00:11:22.392 16:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:22.392 16:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:22.392 killing process with pid 86211 00:11:22.392 16:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86211' 00:11:22.393 16:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 86211 00:11:22.393 [2024-12-07 16:37:21.034056] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.393 16:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 86211 00:11:22.393 [2024-12-07 16:37:21.099175] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.651 16:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.07a6jgznfi 00:11:22.651 16:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:22.651 16:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:22.651 16:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:22.651 16:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:22.651 16:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:22.651 16:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:22.651 16:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:22.651 00:11:22.651 real 0m3.577s 00:11:22.651 user 0m4.297s 00:11:22.651 sys 0m0.724s 00:11:22.651 16:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.651 16:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.651 ************************************ 00:11:22.651 END TEST raid_write_error_test 00:11:22.651 ************************************ 00:11:22.651 16:37:21 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:22.651 16:37:21 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:22.651 16:37:21 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:22.651 16:37:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:22.651 16:37:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.651 16:37:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.910 ************************************ 00:11:22.910 START TEST raid_rebuild_test 00:11:22.910 ************************************ 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:22.910 
16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86349 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86349 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 86349 ']' 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.910 16:37:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.910 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:22.910 Zero copy mechanism will not be used. 00:11:22.911 [2024-12-07 16:37:21.649105] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:22.911 [2024-12-07 16:37:21.649223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86349 ] 00:11:23.169 [2024-12-07 16:37:21.810275] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.169 [2024-12-07 16:37:21.881704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.169 [2024-12-07 16:37:21.960177] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.169 [2024-12-07 16:37:21.960228] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.739 BaseBdev1_malloc 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.739 [2024-12-07 16:37:22.516479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:23.739 
[2024-12-07 16:37:22.516554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.739 [2024-12-07 16:37:22.516595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:23.739 [2024-12-07 16:37:22.516624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.739 [2024-12-07 16:37:22.519111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.739 [2024-12-07 16:37:22.519147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:23.739 BaseBdev1 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.739 BaseBdev2_malloc 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.739 [2024-12-07 16:37:22.559162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:23.739 [2024-12-07 16:37:22.559231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.739 [2024-12-07 16:37:22.559258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:11:23.739 [2024-12-07 16:37:22.559269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.739 [2024-12-07 16:37:22.561974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.739 [2024-12-07 16:37:22.562013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:23.739 BaseBdev2 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.739 spare_malloc 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.739 spare_delay 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.739 [2024-12-07 16:37:22.606159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:23.739 [2024-12-07 16:37:22.606217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:23.739 [2024-12-07 16:37:22.606241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:23.739 [2024-12-07 16:37:22.606250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.739 [2024-12-07 16:37:22.608691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.739 [2024-12-07 16:37:22.608726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:23.739 spare 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.739 [2024-12-07 16:37:22.618191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.739 [2024-12-07 16:37:22.620381] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.739 [2024-12-07 16:37:22.620484] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:23.739 [2024-12-07 16:37:22.620495] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:23.739 [2024-12-07 16:37:22.620756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:23.739 [2024-12-07 16:37:22.620889] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:23.739 [2024-12-07 16:37:22.620916] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:23.739 [2024-12-07 16:37:22.621046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.739 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.999 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.999 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.999 "name": "raid_bdev1", 00:11:23.999 "uuid": "c6a9375c-e99d-4c57-9b59-29cabc976804", 00:11:23.999 "strip_size_kb": 0, 00:11:23.999 "state": "online", 00:11:23.999 
"raid_level": "raid1", 00:11:23.999 "superblock": false, 00:11:23.999 "num_base_bdevs": 2, 00:11:23.999 "num_base_bdevs_discovered": 2, 00:11:23.999 "num_base_bdevs_operational": 2, 00:11:23.999 "base_bdevs_list": [ 00:11:23.999 { 00:11:23.999 "name": "BaseBdev1", 00:11:23.999 "uuid": "5d6dba3f-35d5-5636-beb8-21f83c53e49a", 00:11:23.999 "is_configured": true, 00:11:23.999 "data_offset": 0, 00:11:23.999 "data_size": 65536 00:11:23.999 }, 00:11:23.999 { 00:11:23.999 "name": "BaseBdev2", 00:11:23.999 "uuid": "b0e677db-ca37-5547-8715-00bee7e008ef", 00:11:23.999 "is_configured": true, 00:11:23.999 "data_offset": 0, 00:11:23.999 "data_size": 65536 00:11:23.999 } 00:11:23.999 ] 00:11:23.999 }' 00:11:23.999 16:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.999 16:37:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.258 16:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:24.258 16:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:24.258 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.259 [2024-12-07 16:37:23.069798] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:24.259 16:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:24.518 [2024-12-07 16:37:23.345021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:24.518 /dev/nbd0 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:24.518 1+0 records in 00:11:24.518 1+0 records out 00:11:24.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449895 s, 9.1 MB/s 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:24.518 16:37:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:24.776 16:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:11:24.776 16:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:24.776 16:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:28.955 65536+0 records in 00:11:28.955 65536+0 records out 00:11:28.955 33554432 bytes (34 MB, 32 MiB) copied, 3.737 s, 9.0 MB/s 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:28.955 [2024-12-07 16:37:27.349331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.955 [2024-12-07 16:37:27.385357] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:28.955 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.956 "name": "raid_bdev1", 00:11:28.956 "uuid": "c6a9375c-e99d-4c57-9b59-29cabc976804", 00:11:28.956 "strip_size_kb": 0, 00:11:28.956 "state": "online", 00:11:28.956 "raid_level": "raid1", 00:11:28.956 "superblock": false, 00:11:28.956 "num_base_bdevs": 2, 00:11:28.956 "num_base_bdevs_discovered": 1, 00:11:28.956 "num_base_bdevs_operational": 1, 00:11:28.956 "base_bdevs_list": [ 00:11:28.956 { 00:11:28.956 "name": null, 00:11:28.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.956 "is_configured": false, 00:11:28.956 "data_offset": 0, 00:11:28.956 "data_size": 65536 00:11:28.956 }, 00:11:28.956 { 00:11:28.956 "name": "BaseBdev2", 00:11:28.956 "uuid": "b0e677db-ca37-5547-8715-00bee7e008ef", 00:11:28.956 "is_configured": true, 00:11:28.956 "data_offset": 0, 00:11:28.956 "data_size": 65536 00:11:28.956 } 00:11:28.956 ] 00:11:28.956 }' 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.956 [2024-12-07 16:37:27.836636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:28.956 [2024-12-07 16:37:27.844242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 00:11:28.956 16:37:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.956 16:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:28.956 [2024-12-07 16:37:27.846531] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:30.326 16:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:30.326 16:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.326 16:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:30.326 16:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:30.326 16:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.326 16:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.326 16:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.326 16:37:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.326 16:37:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.326 16:37:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.326 16:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.326 "name": "raid_bdev1", 00:11:30.326 "uuid": "c6a9375c-e99d-4c57-9b59-29cabc976804", 00:11:30.326 "strip_size_kb": 0, 00:11:30.326 "state": "online", 00:11:30.326 "raid_level": "raid1", 00:11:30.326 "superblock": false, 00:11:30.326 "num_base_bdevs": 2, 00:11:30.326 "num_base_bdevs_discovered": 2, 00:11:30.326 "num_base_bdevs_operational": 2, 00:11:30.326 "process": { 00:11:30.326 "type": "rebuild", 00:11:30.326 "target": "spare", 00:11:30.326 "progress": { 00:11:30.326 "blocks": 20480, 
00:11:30.326 "percent": 31 00:11:30.326 } 00:11:30.326 }, 00:11:30.326 "base_bdevs_list": [ 00:11:30.326 { 00:11:30.326 "name": "spare", 00:11:30.326 "uuid": "cb12a34c-7bef-542c-b88d-640593c49ebe", 00:11:30.326 "is_configured": true, 00:11:30.326 "data_offset": 0, 00:11:30.326 "data_size": 65536 00:11:30.326 }, 00:11:30.326 { 00:11:30.326 "name": "BaseBdev2", 00:11:30.326 "uuid": "b0e677db-ca37-5547-8715-00bee7e008ef", 00:11:30.326 "is_configured": true, 00:11:30.326 "data_offset": 0, 00:11:30.326 "data_size": 65536 00:11:30.326 } 00:11:30.326 ] 00:11:30.326 }' 00:11:30.326 16:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.326 16:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:30.326 16:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.326 [2024-12-07 16:37:29.010929] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:30.326 [2024-12-07 16:37:29.055781] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:30.326 [2024-12-07 16:37:29.055861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.326 [2024-12-07 16:37:29.055883] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:30.326 [2024-12-07 16:37:29.055892] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:30.326 16:37:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.326 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.327 16:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.327 16:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.327 16:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.327 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.327 "name": "raid_bdev1", 00:11:30.327 "uuid": "c6a9375c-e99d-4c57-9b59-29cabc976804", 00:11:30.327 "strip_size_kb": 0, 00:11:30.327 "state": "online", 00:11:30.327 "raid_level": "raid1", 00:11:30.327 
"superblock": false, 00:11:30.327 "num_base_bdevs": 2, 00:11:30.327 "num_base_bdevs_discovered": 1, 00:11:30.327 "num_base_bdevs_operational": 1, 00:11:30.327 "base_bdevs_list": [ 00:11:30.327 { 00:11:30.327 "name": null, 00:11:30.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.327 "is_configured": false, 00:11:30.327 "data_offset": 0, 00:11:30.327 "data_size": 65536 00:11:30.327 }, 00:11:30.327 { 00:11:30.327 "name": "BaseBdev2", 00:11:30.327 "uuid": "b0e677db-ca37-5547-8715-00bee7e008ef", 00:11:30.327 "is_configured": true, 00:11:30.327 "data_offset": 0, 00:11:30.327 "data_size": 65536 00:11:30.327 } 00:11:30.327 ] 00:11:30.327 }' 00:11:30.327 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.327 16:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:11:30.892 "name": "raid_bdev1", 00:11:30.892 "uuid": "c6a9375c-e99d-4c57-9b59-29cabc976804", 00:11:30.892 "strip_size_kb": 0, 00:11:30.892 "state": "online", 00:11:30.892 "raid_level": "raid1", 00:11:30.892 "superblock": false, 00:11:30.892 "num_base_bdevs": 2, 00:11:30.892 "num_base_bdevs_discovered": 1, 00:11:30.892 "num_base_bdevs_operational": 1, 00:11:30.892 "base_bdevs_list": [ 00:11:30.892 { 00:11:30.892 "name": null, 00:11:30.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.892 "is_configured": false, 00:11:30.892 "data_offset": 0, 00:11:30.892 "data_size": 65536 00:11:30.892 }, 00:11:30.892 { 00:11:30.892 "name": "BaseBdev2", 00:11:30.892 "uuid": "b0e677db-ca37-5547-8715-00bee7e008ef", 00:11:30.892 "is_configured": true, 00:11:30.892 "data_offset": 0, 00:11:30.892 "data_size": 65536 00:11:30.892 } 00:11:30.892 ] 00:11:30.892 }' 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.892 [2024-12-07 16:37:29.702911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:30.892 [2024-12-07 16:37:29.710373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:11:30.892 16:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.892 
16:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:30.892 [2024-12-07 16:37:29.712595] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:31.826 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:31.826 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:31.826 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:31.826 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:31.826 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:32.084 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.084 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.084 16:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:32.085 "name": "raid_bdev1", 00:11:32.085 "uuid": "c6a9375c-e99d-4c57-9b59-29cabc976804", 00:11:32.085 "strip_size_kb": 0, 00:11:32.085 "state": "online", 00:11:32.085 "raid_level": "raid1", 00:11:32.085 "superblock": false, 00:11:32.085 "num_base_bdevs": 2, 00:11:32.085 "num_base_bdevs_discovered": 2, 00:11:32.085 "num_base_bdevs_operational": 2, 00:11:32.085 "process": { 00:11:32.085 "type": "rebuild", 00:11:32.085 "target": "spare", 00:11:32.085 "progress": { 00:11:32.085 "blocks": 20480, 00:11:32.085 "percent": 31 00:11:32.085 } 00:11:32.085 }, 00:11:32.085 "base_bdevs_list": [ 
00:11:32.085 { 00:11:32.085 "name": "spare", 00:11:32.085 "uuid": "cb12a34c-7bef-542c-b88d-640593c49ebe", 00:11:32.085 "is_configured": true, 00:11:32.085 "data_offset": 0, 00:11:32.085 "data_size": 65536 00:11:32.085 }, 00:11:32.085 { 00:11:32.085 "name": "BaseBdev2", 00:11:32.085 "uuid": "b0e677db-ca37-5547-8715-00bee7e008ef", 00:11:32.085 "is_configured": true, 00:11:32.085 "data_offset": 0, 00:11:32.085 "data_size": 65536 00:11:32.085 } 00:11:32.085 ] 00:11:32.085 }' 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=302 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:32.085 
16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:32.085 "name": "raid_bdev1", 00:11:32.085 "uuid": "c6a9375c-e99d-4c57-9b59-29cabc976804", 00:11:32.085 "strip_size_kb": 0, 00:11:32.085 "state": "online", 00:11:32.085 "raid_level": "raid1", 00:11:32.085 "superblock": false, 00:11:32.085 "num_base_bdevs": 2, 00:11:32.085 "num_base_bdevs_discovered": 2, 00:11:32.085 "num_base_bdevs_operational": 2, 00:11:32.085 "process": { 00:11:32.085 "type": "rebuild", 00:11:32.085 "target": "spare", 00:11:32.085 "progress": { 00:11:32.085 "blocks": 22528, 00:11:32.085 "percent": 34 00:11:32.085 } 00:11:32.085 }, 00:11:32.085 "base_bdevs_list": [ 00:11:32.085 { 00:11:32.085 "name": "spare", 00:11:32.085 "uuid": "cb12a34c-7bef-542c-b88d-640593c49ebe", 00:11:32.085 "is_configured": true, 00:11:32.085 "data_offset": 0, 00:11:32.085 "data_size": 65536 00:11:32.085 }, 00:11:32.085 { 00:11:32.085 "name": "BaseBdev2", 00:11:32.085 "uuid": "b0e677db-ca37-5547-8715-00bee7e008ef", 00:11:32.085 "is_configured": true, 00:11:32.085 "data_offset": 0, 00:11:32.085 "data_size": 65536 00:11:32.085 } 00:11:32.085 ] 00:11:32.085 }' 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:32.085 16:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:33.467 16:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:33.467 16:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:33.467 16:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:33.467 16:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:33.467 16:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:33.467 16:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:33.467 16:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.467 16:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.467 16:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.467 16:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.467 16:37:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.467 16:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:33.467 "name": "raid_bdev1", 00:11:33.467 "uuid": "c6a9375c-e99d-4c57-9b59-29cabc976804", 00:11:33.467 "strip_size_kb": 0, 00:11:33.467 "state": "online", 00:11:33.467 "raid_level": "raid1", 00:11:33.467 "superblock": false, 00:11:33.467 "num_base_bdevs": 2, 00:11:33.467 "num_base_bdevs_discovered": 2, 00:11:33.467 "num_base_bdevs_operational": 2, 00:11:33.467 "process": { 
00:11:33.467 "type": "rebuild", 00:11:33.467 "target": "spare", 00:11:33.467 "progress": { 00:11:33.467 "blocks": 45056, 00:11:33.467 "percent": 68 00:11:33.467 } 00:11:33.467 }, 00:11:33.467 "base_bdevs_list": [ 00:11:33.467 { 00:11:33.467 "name": "spare", 00:11:33.467 "uuid": "cb12a34c-7bef-542c-b88d-640593c49ebe", 00:11:33.467 "is_configured": true, 00:11:33.467 "data_offset": 0, 00:11:33.467 "data_size": 65536 00:11:33.467 }, 00:11:33.467 { 00:11:33.467 "name": "BaseBdev2", 00:11:33.467 "uuid": "b0e677db-ca37-5547-8715-00bee7e008ef", 00:11:33.467 "is_configured": true, 00:11:33.467 "data_offset": 0, 00:11:33.467 "data_size": 65536 00:11:33.467 } 00:11:33.467 ] 00:11:33.467 }' 00:11:33.467 16:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:33.467 16:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:33.467 16:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:33.467 16:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:33.467 16:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:34.403 [2024-12-07 16:37:32.935540] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:34.403 [2024-12-07 16:37:32.935668] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:34.403 [2024-12-07 16:37:32.935723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.403 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:34.403 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:34.403 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:34.404 "name": "raid_bdev1", 00:11:34.404 "uuid": "c6a9375c-e99d-4c57-9b59-29cabc976804", 00:11:34.404 "strip_size_kb": 0, 00:11:34.404 "state": "online", 00:11:34.404 "raid_level": "raid1", 00:11:34.404 "superblock": false, 00:11:34.404 "num_base_bdevs": 2, 00:11:34.404 "num_base_bdevs_discovered": 2, 00:11:34.404 "num_base_bdevs_operational": 2, 00:11:34.404 "base_bdevs_list": [ 00:11:34.404 { 00:11:34.404 "name": "spare", 00:11:34.404 "uuid": "cb12a34c-7bef-542c-b88d-640593c49ebe", 00:11:34.404 "is_configured": true, 00:11:34.404 "data_offset": 0, 00:11:34.404 "data_size": 65536 00:11:34.404 }, 00:11:34.404 { 00:11:34.404 "name": "BaseBdev2", 00:11:34.404 "uuid": "b0e677db-ca37-5547-8715-00bee7e008ef", 00:11:34.404 "is_configured": true, 00:11:34.404 "data_offset": 0, 00:11:34.404 "data_size": 65536 00:11:34.404 } 00:11:34.404 ] 00:11:34.404 }' 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:34.404 16:37:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.404 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:34.663 "name": "raid_bdev1", 00:11:34.663 "uuid": "c6a9375c-e99d-4c57-9b59-29cabc976804", 00:11:34.663 "strip_size_kb": 0, 00:11:34.663 "state": "online", 00:11:34.663 "raid_level": "raid1", 00:11:34.663 "superblock": false, 00:11:34.663 "num_base_bdevs": 2, 00:11:34.663 "num_base_bdevs_discovered": 2, 00:11:34.663 "num_base_bdevs_operational": 2, 00:11:34.663 "base_bdevs_list": [ 00:11:34.663 { 00:11:34.663 "name": "spare", 00:11:34.663 "uuid": "cb12a34c-7bef-542c-b88d-640593c49ebe", 00:11:34.663 "is_configured": true, 
00:11:34.663 "data_offset": 0, 00:11:34.663 "data_size": 65536 00:11:34.663 }, 00:11:34.663 { 00:11:34.663 "name": "BaseBdev2", 00:11:34.663 "uuid": "b0e677db-ca37-5547-8715-00bee7e008ef", 00:11:34.663 "is_configured": true, 00:11:34.663 "data_offset": 0, 00:11:34.663 "data_size": 65536 00:11:34.663 } 00:11:34.663 ] 00:11:34.663 }' 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.663 16:37:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.663 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.663 "name": "raid_bdev1", 00:11:34.663 "uuid": "c6a9375c-e99d-4c57-9b59-29cabc976804", 00:11:34.663 "strip_size_kb": 0, 00:11:34.663 "state": "online", 00:11:34.663 "raid_level": "raid1", 00:11:34.663 "superblock": false, 00:11:34.663 "num_base_bdevs": 2, 00:11:34.663 "num_base_bdevs_discovered": 2, 00:11:34.663 "num_base_bdevs_operational": 2, 00:11:34.663 "base_bdevs_list": [ 00:11:34.663 { 00:11:34.663 "name": "spare", 00:11:34.663 "uuid": "cb12a34c-7bef-542c-b88d-640593c49ebe", 00:11:34.663 "is_configured": true, 00:11:34.664 "data_offset": 0, 00:11:34.664 "data_size": 65536 00:11:34.664 }, 00:11:34.664 { 00:11:34.664 "name": "BaseBdev2", 00:11:34.664 "uuid": "b0e677db-ca37-5547-8715-00bee7e008ef", 00:11:34.664 "is_configured": true, 00:11:34.664 "data_offset": 0, 00:11:34.664 "data_size": 65536 00:11:34.664 } 00:11:34.664 ] 00:11:34.664 }' 00:11:34.664 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.664 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.233 [2024-12-07 16:37:33.858068] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:35.233 [2024-12-07 
16:37:33.858112] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.233 [2024-12-07 16:37:33.858253] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.233 [2024-12-07 16:37:33.858350] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.233 [2024-12-07 16:37:33.858373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:35.233 16:37:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:35.492 /dev/nbd0 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:35.492 1+0 records in 00:11:35.492 1+0 records out 00:11:35.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517437 s, 7.9 MB/s 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:35.492 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:35.751 /dev/nbd1 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:35.751 1+0 records in 00:11:35.751 1+0 records out 00:11:35.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451747 s, 9.1 MB/s 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.751 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:36.010 16:37:34 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:36.010 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:36.010 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:36.010 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.010 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.010 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:36.010 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:36.010 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.010 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.010 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
86349 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 86349 ']' 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 86349 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:36.269 16:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86349 00:11:36.269 16:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:36.269 16:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:36.269 16:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86349' 00:11:36.269 killing process with pid 86349 00:11:36.269 16:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 86349 00:11:36.269 Received shutdown signal, test time was about 60.000000 seconds 00:11:36.269 00:11:36.269 Latency(us) 00:11:36.269 [2024-12-07T16:37:35.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:36.269 [2024-12-07T16:37:35.168Z] =================================================================================================================== 00:11:36.269 [2024-12-07T16:37:35.168Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:36.269 [2024-12-07 16:37:35.012131] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:36.269 16:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 86349 00:11:36.269 [2024-12-07 16:37:35.071266] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:36.834 00:11:36.834 real 0m13.887s 00:11:36.834 user 0m15.945s 00:11:36.834 sys 
0m3.027s 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.834 ************************************ 00:11:36.834 END TEST raid_rebuild_test 00:11:36.834 ************************************ 00:11:36.834 16:37:35 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:36.834 16:37:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:36.834 16:37:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.834 16:37:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:36.834 ************************************ 00:11:36.834 START TEST raid_rebuild_test_sb 00:11:36.834 ************************************ 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86751 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86751 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' 
-z 86751 ']' 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.834 16:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.835 16:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.835 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:36.835 Zero copy mechanism will not be used. 00:11:36.835 [2024-12-07 16:37:35.615475] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:36.835 [2024-12-07 16:37:35.615636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86751 ] 00:11:37.093 [2024-12-07 16:37:35.779622] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.093 [2024-12-07 16:37:35.857194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.093 [2024-12-07 16:37:35.935529] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.093 [2024-12-07 16:37:35.935578] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.662 BaseBdev1_malloc 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.662 [2024-12-07 16:37:36.457130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:37.662 [2024-12-07 16:37:36.457220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.662 [2024-12-07 16:37:36.457250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:37.662 [2024-12-07 16:37:36.457268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.662 [2024-12-07 16:37:36.459747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.662 [2024-12-07 16:37:36.459784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:37.662 BaseBdev1 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.662 BaseBdev2_malloc 00:11:37.662 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.663 [2024-12-07 16:37:36.502051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:37.663 [2024-12-07 16:37:36.502113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.663 [2024-12-07 16:37:36.502138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:37.663 [2024-12-07 16:37:36.502147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.663 [2024-12-07 16:37:36.504630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.663 [2024-12-07 16:37:36.504665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:37.663 BaseBdev2 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.663 spare_malloc 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.663 spare_delay 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.663 [2024-12-07 16:37:36.549005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:37.663 [2024-12-07 16:37:36.549063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.663 [2024-12-07 16:37:36.549102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:37.663 [2024-12-07 16:37:36.549110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.663 [2024-12-07 16:37:36.551584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.663 [2024-12-07 16:37:36.551617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:37.663 spare 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:37.663 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.663 
16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.923 [2024-12-07 16:37:36.561042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:37.923 [2024-12-07 16:37:36.563242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.923 [2024-12-07 16:37:36.563410] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:37.923 [2024-12-07 16:37:36.563446] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:37.923 [2024-12-07 16:37:36.563709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:37.923 [2024-12-07 16:37:36.563886] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:37.923 [2024-12-07 16:37:36.563910] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:37.923 [2024-12-07 16:37:36.564044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.923 "name": "raid_bdev1", 00:11:37.923 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:37.923 "strip_size_kb": 0, 00:11:37.923 "state": "online", 00:11:37.923 "raid_level": "raid1", 00:11:37.923 "superblock": true, 00:11:37.923 "num_base_bdevs": 2, 00:11:37.923 "num_base_bdevs_discovered": 2, 00:11:37.923 "num_base_bdevs_operational": 2, 00:11:37.923 "base_bdevs_list": [ 00:11:37.923 { 00:11:37.923 "name": "BaseBdev1", 00:11:37.923 "uuid": "d09ca98d-0b95-51c7-910c-2d624af81a3e", 00:11:37.923 "is_configured": true, 00:11:37.923 "data_offset": 2048, 00:11:37.923 "data_size": 63488 00:11:37.923 }, 00:11:37.923 { 00:11:37.923 "name": "BaseBdev2", 00:11:37.923 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:37.923 "is_configured": true, 00:11:37.923 "data_offset": 2048, 00:11:37.923 "data_size": 63488 00:11:37.923 } 00:11:37.923 ] 00:11:37.923 }' 00:11:37.923 16:37:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.923 16:37:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.183 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.183 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.183 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.183 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:38.183 [2024-12-07 16:37:37.016690] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.183 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.183 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:38.183 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:38.183 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.183 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.183 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.183 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:38.442 [2024-12-07 16:37:37.263978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:38.442 /dev/nbd0 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:38.442 1+0 records in 00:11:38.442 1+0 records out 00:11:38.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496516 s, 8.2 MB/s 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:38.442 16:37:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:42.659 63488+0 records in 00:11:42.659 63488+0 records out 00:11:42.659 32505856 bytes (33 MB, 31 MiB) copied, 3.44205 s, 9.4 MB/s 00:11:42.659 16:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:42.659 16:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:11:42.660 16:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:42.660 16:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:42.660 16:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:42.660 16:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.660 16:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:42.660 16:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:42.660 [2024-12-07 16:37:41.000988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.660 16:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.660 [2024-12-07 16:37:41.013108] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.660 "name": "raid_bdev1", 00:11:42.660 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:42.660 "strip_size_kb": 0, 00:11:42.660 "state": "online", 00:11:42.660 "raid_level": "raid1", 
00:11:42.660 "superblock": true, 00:11:42.660 "num_base_bdevs": 2, 00:11:42.660 "num_base_bdevs_discovered": 1, 00:11:42.660 "num_base_bdevs_operational": 1, 00:11:42.660 "base_bdevs_list": [ 00:11:42.660 { 00:11:42.660 "name": null, 00:11:42.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.660 "is_configured": false, 00:11:42.660 "data_offset": 0, 00:11:42.660 "data_size": 63488 00:11:42.660 }, 00:11:42.660 { 00:11:42.660 "name": "BaseBdev2", 00:11:42.660 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:42.660 "is_configured": true, 00:11:42.660 "data_offset": 2048, 00:11:42.660 "data_size": 63488 00:11:42.660 } 00:11:42.660 ] 00:11:42.660 }' 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.660 [2024-12-07 16:37:41.492260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:42.660 [2024-12-07 16:37:41.499646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.660 16:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:42.660 [2024-12-07 16:37:41.501783] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.040 "name": "raid_bdev1", 00:11:44.040 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:44.040 "strip_size_kb": 0, 00:11:44.040 "state": "online", 00:11:44.040 "raid_level": "raid1", 00:11:44.040 "superblock": true, 00:11:44.040 "num_base_bdevs": 2, 00:11:44.040 "num_base_bdevs_discovered": 2, 00:11:44.040 "num_base_bdevs_operational": 2, 00:11:44.040 "process": { 00:11:44.040 "type": "rebuild", 00:11:44.040 "target": "spare", 00:11:44.040 "progress": { 00:11:44.040 "blocks": 20480, 00:11:44.040 "percent": 32 00:11:44.040 } 00:11:44.040 }, 00:11:44.040 "base_bdevs_list": [ 00:11:44.040 { 00:11:44.040 "name": "spare", 00:11:44.040 "uuid": "7f03b186-2f9d-5862-be7e-7d004106056c", 00:11:44.040 "is_configured": true, 00:11:44.040 "data_offset": 2048, 00:11:44.040 "data_size": 63488 00:11:44.040 }, 00:11:44.040 { 00:11:44.040 "name": "BaseBdev2", 00:11:44.040 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:44.040 "is_configured": true, 00:11:44.040 "data_offset": 2048, 
00:11:44.040 "data_size": 63488 00:11:44.040 } 00:11:44.040 ] 00:11:44.040 }' 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.040 [2024-12-07 16:37:42.641505] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:44.040 [2024-12-07 16:37:42.710153] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:44.040 [2024-12-07 16:37:42.710234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.040 [2024-12-07 16:37:42.710272] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:44.040 [2024-12-07 16:37:42.710281] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.040 16:37:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.040 16:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.041 16:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.041 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.041 16:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.041 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.041 "name": "raid_bdev1", 00:11:44.041 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:44.041 "strip_size_kb": 0, 00:11:44.041 "state": "online", 00:11:44.041 "raid_level": "raid1", 00:11:44.041 "superblock": true, 00:11:44.041 "num_base_bdevs": 2, 00:11:44.041 "num_base_bdevs_discovered": 1, 00:11:44.041 "num_base_bdevs_operational": 1, 00:11:44.041 "base_bdevs_list": [ 00:11:44.041 { 00:11:44.041 "name": null, 00:11:44.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.041 "is_configured": false, 00:11:44.041 "data_offset": 0, 00:11:44.041 "data_size": 63488 00:11:44.041 }, 00:11:44.041 { 
00:11:44.041 "name": "BaseBdev2", 00:11:44.041 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:44.041 "is_configured": true, 00:11:44.041 "data_offset": 2048, 00:11:44.041 "data_size": 63488 00:11:44.041 } 00:11:44.041 ] 00:11:44.041 }' 00:11:44.041 16:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.041 16:37:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.300 16:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:44.300 16:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.300 16:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:44.300 16:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:44.300 16:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.300 16:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.300 16:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.300 16:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.300 16:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.560 16:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.560 16:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.560 "name": "raid_bdev1", 00:11:44.560 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:44.560 "strip_size_kb": 0, 00:11:44.560 "state": "online", 00:11:44.560 "raid_level": "raid1", 00:11:44.560 "superblock": true, 00:11:44.560 "num_base_bdevs": 2, 00:11:44.560 "num_base_bdevs_discovered": 1, 00:11:44.560 "num_base_bdevs_operational": 1, 
00:11:44.560 "base_bdevs_list": [ 00:11:44.560 { 00:11:44.560 "name": null, 00:11:44.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.560 "is_configured": false, 00:11:44.560 "data_offset": 0, 00:11:44.560 "data_size": 63488 00:11:44.560 }, 00:11:44.560 { 00:11:44.560 "name": "BaseBdev2", 00:11:44.560 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:44.560 "is_configured": true, 00:11:44.560 "data_offset": 2048, 00:11:44.560 "data_size": 63488 00:11:44.560 } 00:11:44.560 ] 00:11:44.560 }' 00:11:44.560 16:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.560 16:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:44.560 16:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.560 16:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:44.560 16:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:44.560 16:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.560 16:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.560 [2024-12-07 16:37:43.325147] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:44.560 [2024-12-07 16:37:43.332509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:11:44.560 16:37:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.561 16:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:44.561 [2024-12-07 16:37:43.334700] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:45.499 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:11:45.499 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.499 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:45.499 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:45.499 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:45.499 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.499 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.499 16:37:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.499 16:37:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.499 16:37:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.499 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:45.499 "name": "raid_bdev1", 00:11:45.499 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:45.499 "strip_size_kb": 0, 00:11:45.499 "state": "online", 00:11:45.499 "raid_level": "raid1", 00:11:45.499 "superblock": true, 00:11:45.499 "num_base_bdevs": 2, 00:11:45.499 "num_base_bdevs_discovered": 2, 00:11:45.499 "num_base_bdevs_operational": 2, 00:11:45.499 "process": { 00:11:45.499 "type": "rebuild", 00:11:45.499 "target": "spare", 00:11:45.499 "progress": { 00:11:45.499 "blocks": 20480, 00:11:45.499 "percent": 32 00:11:45.499 } 00:11:45.499 }, 00:11:45.499 "base_bdevs_list": [ 00:11:45.499 { 00:11:45.499 "name": "spare", 00:11:45.499 "uuid": "7f03b186-2f9d-5862-be7e-7d004106056c", 00:11:45.499 "is_configured": true, 00:11:45.499 "data_offset": 2048, 00:11:45.499 "data_size": 63488 00:11:45.499 }, 00:11:45.499 { 00:11:45.499 "name": "BaseBdev2", 00:11:45.499 "uuid": 
"24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:45.499 "is_configured": true, 00:11:45.499 "data_offset": 2048, 00:11:45.499 "data_size": 63488 00:11:45.499 } 00:11:45.499 ] 00:11:45.499 }' 00:11:45.499 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:45.759 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=316 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:45.759 "name": "raid_bdev1", 00:11:45.759 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:45.759 "strip_size_kb": 0, 00:11:45.759 "state": "online", 00:11:45.759 "raid_level": "raid1", 00:11:45.759 "superblock": true, 00:11:45.759 "num_base_bdevs": 2, 00:11:45.759 "num_base_bdevs_discovered": 2, 00:11:45.759 "num_base_bdevs_operational": 2, 00:11:45.759 "process": { 00:11:45.759 "type": "rebuild", 00:11:45.759 "target": "spare", 00:11:45.759 "progress": { 00:11:45.759 "blocks": 22528, 00:11:45.759 "percent": 35 00:11:45.759 } 00:11:45.759 }, 00:11:45.759 "base_bdevs_list": [ 00:11:45.759 { 00:11:45.759 "name": "spare", 00:11:45.759 "uuid": "7f03b186-2f9d-5862-be7e-7d004106056c", 00:11:45.759 "is_configured": true, 00:11:45.759 "data_offset": 2048, 00:11:45.759 "data_size": 63488 00:11:45.759 }, 00:11:45.759 { 00:11:45.759 "name": "BaseBdev2", 00:11:45.759 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:45.759 "is_configured": true, 00:11:45.759 "data_offset": 2048, 00:11:45.759 "data_size": 63488 00:11:45.759 } 00:11:45.759 ] 00:11:45.759 }' 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:45.759 16:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:47.138 16:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:47.138 16:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:47.139 "name": "raid_bdev1", 00:11:47.139 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:47.139 "strip_size_kb": 0, 00:11:47.139 "state": "online", 00:11:47.139 "raid_level": "raid1", 00:11:47.139 "superblock": true, 00:11:47.139 "num_base_bdevs": 2, 00:11:47.139 "num_base_bdevs_discovered": 2, 00:11:47.139 
"num_base_bdevs_operational": 2, 00:11:47.139 "process": { 00:11:47.139 "type": "rebuild", 00:11:47.139 "target": "spare", 00:11:47.139 "progress": { 00:11:47.139 "blocks": 45056, 00:11:47.139 "percent": 70 00:11:47.139 } 00:11:47.139 }, 00:11:47.139 "base_bdevs_list": [ 00:11:47.139 { 00:11:47.139 "name": "spare", 00:11:47.139 "uuid": "7f03b186-2f9d-5862-be7e-7d004106056c", 00:11:47.139 "is_configured": true, 00:11:47.139 "data_offset": 2048, 00:11:47.139 "data_size": 63488 00:11:47.139 }, 00:11:47.139 { 00:11:47.139 "name": "BaseBdev2", 00:11:47.139 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:47.139 "is_configured": true, 00:11:47.139 "data_offset": 2048, 00:11:47.139 "data_size": 63488 00:11:47.139 } 00:11:47.139 ] 00:11:47.139 }' 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:47.139 16:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:47.705 [2024-12-07 16:37:46.455016] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:47.705 [2024-12-07 16:37:46.455121] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:47.705 [2024-12-07 16:37:46.455222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:47.964 "name": "raid_bdev1", 00:11:47.964 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:47.964 "strip_size_kb": 0, 00:11:47.964 "state": "online", 00:11:47.964 "raid_level": "raid1", 00:11:47.964 "superblock": true, 00:11:47.964 "num_base_bdevs": 2, 00:11:47.964 "num_base_bdevs_discovered": 2, 00:11:47.964 "num_base_bdevs_operational": 2, 00:11:47.964 "base_bdevs_list": [ 00:11:47.964 { 00:11:47.964 "name": "spare", 00:11:47.964 "uuid": "7f03b186-2f9d-5862-be7e-7d004106056c", 00:11:47.964 "is_configured": true, 00:11:47.964 "data_offset": 2048, 00:11:47.964 "data_size": 63488 00:11:47.964 }, 00:11:47.964 { 00:11:47.964 "name": "BaseBdev2", 00:11:47.964 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:47.964 "is_configured": true, 00:11:47.964 "data_offset": 2048, 00:11:47.964 "data_size": 63488 00:11:47.964 } 00:11:47.964 ] 00:11:47.964 }' 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:47.964 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.236 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:48.236 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:48.236 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:48.236 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.236 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:48.236 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:48.236 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.236 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.236 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.236 16:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.236 16:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.236 16:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.236 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.236 "name": "raid_bdev1", 00:11:48.236 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:48.236 "strip_size_kb": 0, 00:11:48.236 "state": "online", 00:11:48.236 "raid_level": "raid1", 00:11:48.236 "superblock": true, 00:11:48.236 "num_base_bdevs": 2, 00:11:48.236 "num_base_bdevs_discovered": 2, 00:11:48.236 "num_base_bdevs_operational": 2, 
00:11:48.236 "base_bdevs_list": [ 00:11:48.236 { 00:11:48.236 "name": "spare", 00:11:48.236 "uuid": "7f03b186-2f9d-5862-be7e-7d004106056c", 00:11:48.236 "is_configured": true, 00:11:48.236 "data_offset": 2048, 00:11:48.236 "data_size": 63488 00:11:48.236 }, 00:11:48.236 { 00:11:48.236 "name": "BaseBdev2", 00:11:48.236 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:48.236 "is_configured": true, 00:11:48.237 "data_offset": 2048, 00:11:48.237 "data_size": 63488 00:11:48.237 } 00:11:48.237 ] 00:11:48.237 }' 00:11:48.237 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.237 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:48.237 16:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.237 16:37:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.237 "name": "raid_bdev1", 00:11:48.237 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:48.237 "strip_size_kb": 0, 00:11:48.237 "state": "online", 00:11:48.237 "raid_level": "raid1", 00:11:48.237 "superblock": true, 00:11:48.237 "num_base_bdevs": 2, 00:11:48.237 "num_base_bdevs_discovered": 2, 00:11:48.237 "num_base_bdevs_operational": 2, 00:11:48.237 "base_bdevs_list": [ 00:11:48.237 { 00:11:48.237 "name": "spare", 00:11:48.237 "uuid": "7f03b186-2f9d-5862-be7e-7d004106056c", 00:11:48.237 "is_configured": true, 00:11:48.237 "data_offset": 2048, 00:11:48.237 "data_size": 63488 00:11:48.237 }, 00:11:48.237 { 00:11:48.237 "name": "BaseBdev2", 00:11:48.237 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:48.237 "is_configured": true, 00:11:48.237 "data_offset": 2048, 00:11:48.237 "data_size": 63488 00:11:48.237 } 00:11:48.237 ] 00:11:48.237 }' 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.237 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.802 [2024-12-07 16:37:47.416874] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.802 [2024-12-07 16:37:47.416907] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.802 [2024-12-07 16:37:47.417018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.802 [2024-12-07 16:37:47.417101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.802 [2024-12-07 16:37:47.417116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:48.802 
16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:48.802 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:48.802 /dev/nbd0 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:49.060 16:37:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.060 1+0 records in 00:11:49.060 1+0 records out 00:11:49.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039308 s, 10.4 MB/s 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:49.060 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:49.060 /dev/nbd1 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- 
# grep -q -w nbd1 /proc/partitions 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.318 1+0 records in 00:11:49.318 1+0 records out 00:11:49.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309349 s, 13.2 MB/s 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:49.318 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:49.319 16:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:49.319 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:49.319 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:49.319 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:49.319 16:37:48 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:49.319 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:49.319 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.319 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:49.577 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:49.577 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:49.577 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:49.577 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.577 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.577 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:49.577 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:49.577 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.577 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.577 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.836 [2024-12-07 16:37:48.508128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:49.836 [2024-12-07 16:37:48.508232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.836 [2024-12-07 16:37:48.508273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:49.836 [2024-12-07 16:37:48.508311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.836 [2024-12-07 16:37:48.510797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.836 [2024-12-07 16:37:48.510887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:49.836 [2024-12-07 16:37:48.510984] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev spare 00:11:49.836 [2024-12-07 16:37:48.511058] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:49.836 [2024-12-07 16:37:48.511197] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.836 spare 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.836 [2024-12-07 16:37:48.611108] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:49.836 [2024-12-07 16:37:48.611192] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:49.836 [2024-12-07 16:37:48.611527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:11:49.836 [2024-12-07 16:37:48.611726] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:49.836 [2024-12-07 16:37:48.611781] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:49.836 [2024-12-07 16:37:48.611990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:49.836 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.837 16:37:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.837 "name": "raid_bdev1", 00:11:49.837 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:49.837 "strip_size_kb": 0, 00:11:49.837 "state": "online", 00:11:49.837 "raid_level": "raid1", 00:11:49.837 "superblock": true, 00:11:49.837 "num_base_bdevs": 2, 00:11:49.837 "num_base_bdevs_discovered": 2, 00:11:49.837 "num_base_bdevs_operational": 2, 00:11:49.837 "base_bdevs_list": [ 00:11:49.837 { 00:11:49.837 "name": "spare", 00:11:49.837 "uuid": "7f03b186-2f9d-5862-be7e-7d004106056c", 00:11:49.837 "is_configured": true, 00:11:49.837 "data_offset": 2048, 00:11:49.837 "data_size": 63488 00:11:49.837 }, 00:11:49.837 { 
00:11:49.837 "name": "BaseBdev2", 00:11:49.837 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:49.837 "is_configured": true, 00:11:49.837 "data_offset": 2048, 00:11:49.837 "data_size": 63488 00:11:49.837 } 00:11:49.837 ] 00:11:49.837 }' 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.837 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.096 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:50.096 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.096 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:50.096 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:50.096 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.096 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.096 16:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.096 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.096 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.356 16:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.356 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.356 "name": "raid_bdev1", 00:11:50.356 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:50.356 "strip_size_kb": 0, 00:11:50.356 "state": "online", 00:11:50.356 "raid_level": "raid1", 00:11:50.356 "superblock": true, 00:11:50.356 "num_base_bdevs": 2, 00:11:50.356 "num_base_bdevs_discovered": 2, 00:11:50.356 "num_base_bdevs_operational": 2, 
00:11:50.356 "base_bdevs_list": [ 00:11:50.356 { 00:11:50.356 "name": "spare", 00:11:50.356 "uuid": "7f03b186-2f9d-5862-be7e-7d004106056c", 00:11:50.356 "is_configured": true, 00:11:50.356 "data_offset": 2048, 00:11:50.356 "data_size": 63488 00:11:50.356 }, 00:11:50.356 { 00:11:50.356 "name": "BaseBdev2", 00:11:50.356 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:50.356 "is_configured": true, 00:11:50.356 "data_offset": 2048, 00:11:50.356 "data_size": 63488 00:11:50.356 } 00:11:50.356 ] 00:11:50.356 }' 00:11:50.356 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.356 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:50.356 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.356 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:50.356 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:50.356 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.356 16:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.356 16:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.356 16:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.356 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:50.356 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:50.356 16:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.356 16:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.356 [2024-12-07 16:37:49.135173] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.357 "name": "raid_bdev1", 00:11:50.357 "uuid": 
"b6272840-69d6-4641-b1c4-21da767f6431", 00:11:50.357 "strip_size_kb": 0, 00:11:50.357 "state": "online", 00:11:50.357 "raid_level": "raid1", 00:11:50.357 "superblock": true, 00:11:50.357 "num_base_bdevs": 2, 00:11:50.357 "num_base_bdevs_discovered": 1, 00:11:50.357 "num_base_bdevs_operational": 1, 00:11:50.357 "base_bdevs_list": [ 00:11:50.357 { 00:11:50.357 "name": null, 00:11:50.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.357 "is_configured": false, 00:11:50.357 "data_offset": 0, 00:11:50.357 "data_size": 63488 00:11:50.357 }, 00:11:50.357 { 00:11:50.357 "name": "BaseBdev2", 00:11:50.357 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:50.357 "is_configured": true, 00:11:50.357 "data_offset": 2048, 00:11:50.357 "data_size": 63488 00:11:50.357 } 00:11:50.357 ] 00:11:50.357 }' 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.357 16:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.946 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:50.946 16:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.946 16:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.947 [2024-12-07 16:37:49.582479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:50.947 [2024-12-07 16:37:49.582775] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:50.947 [2024-12-07 16:37:49.582839] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:50.947 [2024-12-07 16:37:49.582913] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:50.947 [2024-12-07 16:37:49.590175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:11:50.947 16:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.947 16:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:50.947 [2024-12-07 16:37:49.592489] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.883 "name": "raid_bdev1", 00:11:51.883 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:51.883 "strip_size_kb": 0, 00:11:51.883 "state": "online", 00:11:51.883 "raid_level": "raid1", 
00:11:51.883 "superblock": true, 00:11:51.883 "num_base_bdevs": 2, 00:11:51.883 "num_base_bdevs_discovered": 2, 00:11:51.883 "num_base_bdevs_operational": 2, 00:11:51.883 "process": { 00:11:51.883 "type": "rebuild", 00:11:51.883 "target": "spare", 00:11:51.883 "progress": { 00:11:51.883 "blocks": 20480, 00:11:51.883 "percent": 32 00:11:51.883 } 00:11:51.883 }, 00:11:51.883 "base_bdevs_list": [ 00:11:51.883 { 00:11:51.883 "name": "spare", 00:11:51.883 "uuid": "7f03b186-2f9d-5862-be7e-7d004106056c", 00:11:51.883 "is_configured": true, 00:11:51.883 "data_offset": 2048, 00:11:51.883 "data_size": 63488 00:11:51.883 }, 00:11:51.883 { 00:11:51.883 "name": "BaseBdev2", 00:11:51.883 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:51.883 "is_configured": true, 00:11:51.883 "data_offset": 2048, 00:11:51.883 "data_size": 63488 00:11:51.883 } 00:11:51.883 ] 00:11:51.883 }' 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.883 16:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.883 [2024-12-07 16:37:50.728529] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:52.142 [2024-12-07 16:37:50.800197] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:52.142 [2024-12-07 16:37:50.800268] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:11:52.142 [2024-12-07 16:37:50.800288] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:52.142 [2024-12-07 16:37:50.800295] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.142 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.142 "name": "raid_bdev1", 00:11:52.142 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:52.142 "strip_size_kb": 0, 00:11:52.142 "state": "online", 00:11:52.142 "raid_level": "raid1", 00:11:52.142 "superblock": true, 00:11:52.142 "num_base_bdevs": 2, 00:11:52.142 "num_base_bdevs_discovered": 1, 00:11:52.143 "num_base_bdevs_operational": 1, 00:11:52.143 "base_bdevs_list": [ 00:11:52.143 { 00:11:52.143 "name": null, 00:11:52.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.143 "is_configured": false, 00:11:52.143 "data_offset": 0, 00:11:52.143 "data_size": 63488 00:11:52.143 }, 00:11:52.143 { 00:11:52.143 "name": "BaseBdev2", 00:11:52.143 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:52.143 "is_configured": true, 00:11:52.143 "data_offset": 2048, 00:11:52.143 "data_size": 63488 00:11:52.143 } 00:11:52.143 ] 00:11:52.143 }' 00:11:52.143 16:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.143 16:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.401 16:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:52.401 16:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.401 16:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.401 [2024-12-07 16:37:51.234996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:52.401 [2024-12-07 16:37:51.235120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.401 [2024-12-07 16:37:51.235167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:52.401 [2024-12-07 16:37:51.235226] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.401 [2024-12-07 16:37:51.235762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.401 [2024-12-07 16:37:51.235822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:52.401 [2024-12-07 16:37:51.235942] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:52.401 [2024-12-07 16:37:51.235981] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:52.401 [2024-12-07 16:37:51.236028] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:52.401 [2024-12-07 16:37:51.236086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:52.401 [2024-12-07 16:37:51.242931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:11:52.401 spare 00:11:52.401 16:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.401 16:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:52.401 [2024-12-07 16:37:51.245226] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.778 "name": "raid_bdev1", 00:11:53.778 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:53.778 "strip_size_kb": 0, 00:11:53.778 "state": "online", 00:11:53.778 "raid_level": "raid1", 00:11:53.778 "superblock": true, 00:11:53.778 "num_base_bdevs": 2, 00:11:53.778 "num_base_bdevs_discovered": 2, 00:11:53.778 "num_base_bdevs_operational": 2, 00:11:53.778 "process": { 00:11:53.778 "type": "rebuild", 00:11:53.778 "target": "spare", 00:11:53.778 "progress": { 00:11:53.778 "blocks": 20480, 00:11:53.778 "percent": 32 00:11:53.778 } 00:11:53.778 }, 00:11:53.778 "base_bdevs_list": [ 00:11:53.778 { 00:11:53.778 "name": "spare", 00:11:53.778 "uuid": "7f03b186-2f9d-5862-be7e-7d004106056c", 00:11:53.778 "is_configured": true, 00:11:53.778 "data_offset": 2048, 00:11:53.778 "data_size": 63488 00:11:53.778 }, 00:11:53.778 { 00:11:53.778 "name": "BaseBdev2", 00:11:53.778 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:53.778 "is_configured": true, 00:11:53.778 "data_offset": 2048, 00:11:53.778 "data_size": 63488 00:11:53.778 } 00:11:53.778 ] 00:11:53.778 }' 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.778 
16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.778 [2024-12-07 16:37:52.409268] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:53.778 [2024-12-07 16:37:52.452966] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:53.778 [2024-12-07 16:37:52.453095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.778 [2024-12-07 16:37:52.453112] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:53.778 [2024-12-07 16:37:52.453122] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.778 "name": "raid_bdev1", 00:11:53.778 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:53.778 "strip_size_kb": 0, 00:11:53.778 "state": "online", 00:11:53.778 "raid_level": "raid1", 00:11:53.778 "superblock": true, 00:11:53.778 "num_base_bdevs": 2, 00:11:53.778 "num_base_bdevs_discovered": 1, 00:11:53.778 "num_base_bdevs_operational": 1, 00:11:53.778 "base_bdevs_list": [ 00:11:53.778 { 00:11:53.778 "name": null, 00:11:53.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.778 "is_configured": false, 00:11:53.778 "data_offset": 0, 00:11:53.778 "data_size": 63488 00:11:53.778 }, 00:11:53.778 { 00:11:53.778 "name": "BaseBdev2", 00:11:53.778 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:53.778 "is_configured": true, 00:11:53.778 "data_offset": 2048, 00:11:53.778 "data_size": 63488 00:11:53.778 } 00:11:53.778 ] 00:11:53.778 }' 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.778 16:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.037 16:37:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:54.037 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.037 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:54.037 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:54.037 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.037 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.037 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.037 16:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.037 16:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.037 16:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.037 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.037 "name": "raid_bdev1", 00:11:54.037 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:54.037 "strip_size_kb": 0, 00:11:54.037 "state": "online", 00:11:54.037 "raid_level": "raid1", 00:11:54.037 "superblock": true, 00:11:54.037 "num_base_bdevs": 2, 00:11:54.037 "num_base_bdevs_discovered": 1, 00:11:54.037 "num_base_bdevs_operational": 1, 00:11:54.037 "base_bdevs_list": [ 00:11:54.037 { 00:11:54.037 "name": null, 00:11:54.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.037 "is_configured": false, 00:11:54.037 "data_offset": 0, 00:11:54.037 "data_size": 63488 00:11:54.037 }, 00:11:54.037 { 00:11:54.037 "name": "BaseBdev2", 00:11:54.037 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:54.037 "is_configured": true, 00:11:54.037 "data_offset": 2048, 00:11:54.037 "data_size": 
63488 00:11:54.037 } 00:11:54.037 ] 00:11:54.037 }' 00:11:54.037 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.296 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:54.296 16:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.296 16:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:54.296 16:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:54.296 16:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.296 16:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.296 16:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.296 16:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:54.296 16:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.296 16:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.296 [2024-12-07 16:37:53.027556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:54.296 [2024-12-07 16:37:53.027615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.296 [2024-12-07 16:37:53.027637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:54.296 [2024-12-07 16:37:53.027650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.296 [2024-12-07 16:37:53.028106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.296 [2024-12-07 16:37:53.028132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:11:54.296 [2024-12-07 16:37:53.028212] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:54.296 [2024-12-07 16:37:53.028233] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:54.296 [2024-12-07 16:37:53.028242] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:54.296 [2024-12-07 16:37:53.028267] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:54.296 BaseBdev1 00:11:54.296 16:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.296 16:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.235 "name": "raid_bdev1", 00:11:55.235 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:55.235 "strip_size_kb": 0, 00:11:55.235 "state": "online", 00:11:55.235 "raid_level": "raid1", 00:11:55.235 "superblock": true, 00:11:55.235 "num_base_bdevs": 2, 00:11:55.235 "num_base_bdevs_discovered": 1, 00:11:55.235 "num_base_bdevs_operational": 1, 00:11:55.235 "base_bdevs_list": [ 00:11:55.235 { 00:11:55.235 "name": null, 00:11:55.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.235 "is_configured": false, 00:11:55.235 "data_offset": 0, 00:11:55.235 "data_size": 63488 00:11:55.235 }, 00:11:55.235 { 00:11:55.235 "name": "BaseBdev2", 00:11:55.235 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:55.235 "is_configured": true, 00:11:55.235 "data_offset": 2048, 00:11:55.235 "data_size": 63488 00:11:55.235 } 00:11:55.235 ] 00:11:55.235 }' 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.235 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.805 "name": "raid_bdev1", 00:11:55.805 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:55.805 "strip_size_kb": 0, 00:11:55.805 "state": "online", 00:11:55.805 "raid_level": "raid1", 00:11:55.805 "superblock": true, 00:11:55.805 "num_base_bdevs": 2, 00:11:55.805 "num_base_bdevs_discovered": 1, 00:11:55.805 "num_base_bdevs_operational": 1, 00:11:55.805 "base_bdevs_list": [ 00:11:55.805 { 00:11:55.805 "name": null, 00:11:55.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.805 "is_configured": false, 00:11:55.805 "data_offset": 0, 00:11:55.805 "data_size": 63488 00:11:55.805 }, 00:11:55.805 { 00:11:55.805 "name": "BaseBdev2", 00:11:55.805 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:55.805 "is_configured": true, 00:11:55.805 "data_offset": 2048, 00:11:55.805 "data_size": 63488 00:11:55.805 } 00:11:55.805 ] 00:11:55.805 }' 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:55.805 16:37:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.805 [2024-12-07 16:37:54.625048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.805 [2024-12-07 16:37:54.625263] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:55.805 [2024-12-07 16:37:54.625277] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:55.805 request: 00:11:55.805 { 00:11:55.805 "base_bdev": "BaseBdev1", 00:11:55.805 "raid_bdev": "raid_bdev1", 00:11:55.805 "method": 
"bdev_raid_add_base_bdev", 00:11:55.805 "req_id": 1 00:11:55.805 } 00:11:55.805 Got JSON-RPC error response 00:11:55.805 response: 00:11:55.805 { 00:11:55.805 "code": -22, 00:11:55.805 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:55.805 } 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:55.805 16:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.184 16:37:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.184 16:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.184 "name": "raid_bdev1", 00:11:57.184 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:57.184 "strip_size_kb": 0, 00:11:57.184 "state": "online", 00:11:57.185 "raid_level": "raid1", 00:11:57.185 "superblock": true, 00:11:57.185 "num_base_bdevs": 2, 00:11:57.185 "num_base_bdevs_discovered": 1, 00:11:57.185 "num_base_bdevs_operational": 1, 00:11:57.185 "base_bdevs_list": [ 00:11:57.185 { 00:11:57.185 "name": null, 00:11:57.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.185 "is_configured": false, 00:11:57.185 "data_offset": 0, 00:11:57.185 "data_size": 63488 00:11:57.185 }, 00:11:57.185 { 00:11:57.185 "name": "BaseBdev2", 00:11:57.185 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:57.185 "is_configured": true, 00:11:57.185 "data_offset": 2048, 00:11:57.185 "data_size": 63488 00:11:57.185 } 00:11:57.185 ] 00:11:57.185 }' 00:11:57.185 16:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.185 16:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.445 "name": "raid_bdev1", 00:11:57.445 "uuid": "b6272840-69d6-4641-b1c4-21da767f6431", 00:11:57.445 "strip_size_kb": 0, 00:11:57.445 "state": "online", 00:11:57.445 "raid_level": "raid1", 00:11:57.445 "superblock": true, 00:11:57.445 "num_base_bdevs": 2, 00:11:57.445 "num_base_bdevs_discovered": 1, 00:11:57.445 "num_base_bdevs_operational": 1, 00:11:57.445 "base_bdevs_list": [ 00:11:57.445 { 00:11:57.445 "name": null, 00:11:57.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.445 "is_configured": false, 00:11:57.445 "data_offset": 0, 00:11:57.445 "data_size": 63488 00:11:57.445 }, 00:11:57.445 { 00:11:57.445 "name": "BaseBdev2", 00:11:57.445 "uuid": "24ae8cb8-db6d-5d4a-9fa2-3039eb648377", 00:11:57.445 "is_configured": true, 00:11:57.445 "data_offset": 2048, 00:11:57.445 "data_size": 63488 00:11:57.445 } 00:11:57.445 ] 00:11:57.445 }' 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86751 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86751 ']' 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86751 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86751 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86751' 00:11:57.445 killing process with pid 86751 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86751 00:11:57.445 Received shutdown signal, test time was about 60.000000 seconds 00:11:57.445 00:11:57.445 Latency(us) 00:11:57.445 [2024-12-07T16:37:56.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:57.445 [2024-12-07T16:37:56.344Z] =================================================================================================================== 00:11:57.445 [2024-12-07T16:37:56.344Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:57.445 [2024-12-07 16:37:56.297241] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.445 [2024-12-07 
16:37:56.297410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.445 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86751 00:11:57.445 [2024-12-07 16:37:56.297473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.445 [2024-12-07 16:37:56.297483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:11:57.705 [2024-12-07 16:37:56.355912] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.964 16:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:57.965 00:11:57.965 real 0m21.197s 00:11:57.965 user 0m26.168s 00:11:57.965 sys 0m3.572s 00:11:57.965 ************************************ 00:11:57.965 END TEST raid_rebuild_test_sb 00:11:57.965 ************************************ 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.965 16:37:56 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:57.965 16:37:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:57.965 16:37:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.965 16:37:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:57.965 ************************************ 00:11:57.965 START TEST raid_rebuild_test_io 00:11:57.965 ************************************ 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:57.965 
16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87462 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87462 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 87462 ']' 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.965 16:37:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.225 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:58.225 Zero copy mechanism will not be used. 00:11:58.225 [2024-12-07 16:37:56.881795] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:58.225 [2024-12-07 16:37:56.881941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87462 ] 00:11:58.225 [2024-12-07 16:37:57.047109] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.225 [2024-12-07 16:37:57.120449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.485 [2024-12-07 16:37:57.196965] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.485 [2024-12-07 16:37:57.197010] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.055 BaseBdev1_malloc 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.055 [2024-12-07 16:37:57.744411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:59.055 [2024-12-07 16:37:57.744503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.055 [2024-12-07 16:37:57.744542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:59.055 [2024-12-07 16:37:57.744567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.055 [2024-12-07 16:37:57.747224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.055 [2024-12-07 16:37:57.747261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:59.055 BaseBdev1 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.055 BaseBdev2_malloc 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.055 [2024-12-07 16:37:57.790120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:59.055 [2024-12-07 16:37:57.790195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.055 [2024-12-07 16:37:57.790222] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:59.055 [2024-12-07 16:37:57.790232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.055 [2024-12-07 16:37:57.792839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.055 [2024-12-07 16:37:57.792875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:59.055 BaseBdev2 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.055 spare_malloc 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.055 spare_delay 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.055 [2024-12-07 16:37:57.837264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:11:59.055 [2024-12-07 16:37:57.837322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.055 [2024-12-07 16:37:57.837358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:59.055 [2024-12-07 16:37:57.837367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.055 [2024-12-07 16:37:57.839817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.055 [2024-12-07 16:37:57.839852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:59.055 spare 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.055 [2024-12-07 16:37:57.849279] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.055 [2024-12-07 16:37:57.851461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.055 [2024-12-07 16:37:57.851551] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:59.055 [2024-12-07 16:37:57.851563] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:59.055 [2024-12-07 16:37:57.851823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:59.055 [2024-12-07 16:37:57.851960] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:59.055 [2024-12-07 16:37:57.851980] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006280 00:11:59.055 [2024-12-07 16:37:57.852111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.055 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.056 
"name": "raid_bdev1", 00:11:59.056 "uuid": "fede7518-77ef-4960-9a4e-6b89a66a437e", 00:11:59.056 "strip_size_kb": 0, 00:11:59.056 "state": "online", 00:11:59.056 "raid_level": "raid1", 00:11:59.056 "superblock": false, 00:11:59.056 "num_base_bdevs": 2, 00:11:59.056 "num_base_bdevs_discovered": 2, 00:11:59.056 "num_base_bdevs_operational": 2, 00:11:59.056 "base_bdevs_list": [ 00:11:59.056 { 00:11:59.056 "name": "BaseBdev1", 00:11:59.056 "uuid": "4a3e2d91-eb3b-58ce-8ef2-2d7774892a03", 00:11:59.056 "is_configured": true, 00:11:59.056 "data_offset": 0, 00:11:59.056 "data_size": 65536 00:11:59.056 }, 00:11:59.056 { 00:11:59.056 "name": "BaseBdev2", 00:11:59.056 "uuid": "1216ac39-f2ea-593a-97f6-50181a751f78", 00:11:59.056 "is_configured": true, 00:11:59.056 "data_offset": 0, 00:11:59.056 "data_size": 65536 00:11:59.056 } 00:11:59.056 ] 00:11:59.056 }' 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.056 16:37:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.650 [2024-12-07 16:37:58.288893] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.650 [2024-12-07 16:37:58.404417] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:59.650 16:37:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.650 "name": "raid_bdev1", 00:11:59.650 "uuid": "fede7518-77ef-4960-9a4e-6b89a66a437e", 00:11:59.650 "strip_size_kb": 0, 00:11:59.650 "state": "online", 00:11:59.650 "raid_level": "raid1", 00:11:59.650 "superblock": false, 00:11:59.650 "num_base_bdevs": 2, 00:11:59.650 "num_base_bdevs_discovered": 1, 00:11:59.650 "num_base_bdevs_operational": 1, 00:11:59.650 "base_bdevs_list": [ 00:11:59.650 { 00:11:59.650 "name": null, 00:11:59.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.650 "is_configured": false, 00:11:59.650 "data_offset": 0, 00:11:59.650 "data_size": 65536 00:11:59.650 }, 00:11:59.650 { 00:11:59.650 "name": "BaseBdev2", 00:11:59.650 "uuid": "1216ac39-f2ea-593a-97f6-50181a751f78", 00:11:59.650 "is_configured": true, 00:11:59.650 "data_offset": 0, 00:11:59.650 "data_size": 65536 00:11:59.650 } 00:11:59.650 ] 00:11:59.650 }' 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:59.650 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.650 [2024-12-07 16:37:58.487737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:59.650 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:59.650 Zero copy mechanism will not be used. 00:11:59.650 Running I/O for 60 seconds... 00:12:00.220 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:00.220 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.220 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.220 [2024-12-07 16:37:58.876915] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:00.220 16:37:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.220 16:37:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:00.220 [2024-12-07 16:37:58.936375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:00.220 [2024-12-07 16:37:58.938669] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:00.221 [2024-12-07 16:37:59.046844] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:00.221 [2024-12-07 16:37:59.047600] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:00.480 [2024-12-07 16:37:59.262295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:00.480 [2024-12-07 16:37:59.262789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:00.740 201.00 IOPS, 603.00 MiB/s 
[2024-12-07T16:37:59.639Z] [2024-12-07 16:37:59.605835] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:01.000 [2024-12-07 16:37:59.832087] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:01.000 [2024-12-07 16:37:59.832414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:01.260 16:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.260 16:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.260 16:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.260 16:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.260 16:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.260 16:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.260 16:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.260 16:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.260 16:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.260 16:37:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.260 16:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.260 "name": "raid_bdev1", 00:12:01.260 "uuid": "fede7518-77ef-4960-9a4e-6b89a66a437e", 00:12:01.260 "strip_size_kb": 0, 00:12:01.260 "state": "online", 00:12:01.260 "raid_level": "raid1", 00:12:01.260 "superblock": false, 00:12:01.260 "num_base_bdevs": 2, 00:12:01.260 
"num_base_bdevs_discovered": 2, 00:12:01.260 "num_base_bdevs_operational": 2, 00:12:01.260 "process": { 00:12:01.260 "type": "rebuild", 00:12:01.260 "target": "spare", 00:12:01.260 "progress": { 00:12:01.260 "blocks": 10240, 00:12:01.260 "percent": 15 00:12:01.260 } 00:12:01.260 }, 00:12:01.260 "base_bdevs_list": [ 00:12:01.260 { 00:12:01.260 "name": "spare", 00:12:01.260 "uuid": "1353958b-f307-5917-b3aa-3a596dcf25d7", 00:12:01.260 "is_configured": true, 00:12:01.260 "data_offset": 0, 00:12:01.260 "data_size": 65536 00:12:01.260 }, 00:12:01.260 { 00:12:01.260 "name": "BaseBdev2", 00:12:01.260 "uuid": "1216ac39-f2ea-593a-97f6-50181a751f78", 00:12:01.260 "is_configured": true, 00:12:01.260 "data_offset": 0, 00:12:01.260 "data_size": 65536 00:12:01.260 } 00:12:01.260 ] 00:12:01.260 }' 00:12:01.260 16:37:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.260 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.260 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.260 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.260 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:01.260 16:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.260 16:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.260 [2024-12-07 16:38:00.069922] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.521 [2024-12-07 16:38:00.166992] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:01.521 [2024-12-07 16:38:00.279818] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:12:01.521 [2024-12-07 16:38:00.288766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.521 [2024-12-07 16:38:00.288825] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.521 [2024-12-07 16:38:00.288840] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:01.521 [2024-12-07 16:38:00.309690] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.521 "name": "raid_bdev1", 00:12:01.521 "uuid": "fede7518-77ef-4960-9a4e-6b89a66a437e", 00:12:01.521 "strip_size_kb": 0, 00:12:01.521 "state": "online", 00:12:01.521 "raid_level": "raid1", 00:12:01.521 "superblock": false, 00:12:01.521 "num_base_bdevs": 2, 00:12:01.521 "num_base_bdevs_discovered": 1, 00:12:01.521 "num_base_bdevs_operational": 1, 00:12:01.521 "base_bdevs_list": [ 00:12:01.521 { 00:12:01.521 "name": null, 00:12:01.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.521 "is_configured": false, 00:12:01.521 "data_offset": 0, 00:12:01.521 "data_size": 65536 00:12:01.521 }, 00:12:01.521 { 00:12:01.521 "name": "BaseBdev2", 00:12:01.521 "uuid": "1216ac39-f2ea-593a-97f6-50181a751f78", 00:12:01.521 "is_configured": true, 00:12:01.521 "data_offset": 0, 00:12:01.521 "data_size": 65536 00:12:01.521 } 00:12:01.521 ] 00:12:01.521 }' 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.521 16:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.042 149.50 IOPS, 448.50 MiB/s [2024-12-07T16:38:00.941Z] 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.042 "name": "raid_bdev1", 00:12:02.042 "uuid": "fede7518-77ef-4960-9a4e-6b89a66a437e", 00:12:02.042 "strip_size_kb": 0, 00:12:02.042 "state": "online", 00:12:02.042 "raid_level": "raid1", 00:12:02.042 "superblock": false, 00:12:02.042 "num_base_bdevs": 2, 00:12:02.042 "num_base_bdevs_discovered": 1, 00:12:02.042 "num_base_bdevs_operational": 1, 00:12:02.042 "base_bdevs_list": [ 00:12:02.042 { 00:12:02.042 "name": null, 00:12:02.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.042 "is_configured": false, 00:12:02.042 "data_offset": 0, 00:12:02.042 "data_size": 65536 00:12:02.042 }, 00:12:02.042 { 00:12:02.042 "name": "BaseBdev2", 00:12:02.042 "uuid": "1216ac39-f2ea-593a-97f6-50181a751f78", 00:12:02.042 "is_configured": true, 00:12:02.042 "data_offset": 0, 00:12:02.042 "data_size": 65536 00:12:02.042 } 00:12:02.042 ] 00:12:02.042 }' 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.042 16:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.042 [2024-12-07 16:38:00.922449] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:02.302 16:38:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.302 16:38:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:02.302 [2024-12-07 16:38:00.965270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:02.302 [2024-12-07 16:38:00.967571] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:02.302 [2024-12-07 16:38:01.092640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:02.302 [2024-12-07 16:38:01.093334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:02.561 [2024-12-07 16:38:01.306819] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:02.561 [2024-12-07 16:38:01.307300] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:02.821 160.33 IOPS, 481.00 MiB/s [2024-12-07T16:38:01.720Z] [2024-12-07 16:38:01.645238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:03.081 [2024-12-07 16:38:01.766560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:03.081 16:38:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:03.081 16:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.081 16:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:03.081 16:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:03.081 16:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.081 16:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.081 16:38:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.081 16:38:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.081 16:38:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.081 16:38:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.341 "name": "raid_bdev1", 00:12:03.341 "uuid": "fede7518-77ef-4960-9a4e-6b89a66a437e", 00:12:03.341 "strip_size_kb": 0, 00:12:03.341 "state": "online", 00:12:03.341 "raid_level": "raid1", 00:12:03.341 "superblock": false, 00:12:03.341 "num_base_bdevs": 2, 00:12:03.341 "num_base_bdevs_discovered": 2, 00:12:03.341 "num_base_bdevs_operational": 2, 00:12:03.341 "process": { 00:12:03.341 "type": "rebuild", 00:12:03.341 "target": "spare", 00:12:03.341 "progress": { 00:12:03.341 "blocks": 12288, 00:12:03.341 "percent": 18 00:12:03.341 } 00:12:03.341 }, 00:12:03.341 "base_bdevs_list": [ 00:12:03.341 { 00:12:03.341 "name": "spare", 00:12:03.341 "uuid": "1353958b-f307-5917-b3aa-3a596dcf25d7", 00:12:03.341 "is_configured": true, 00:12:03.341 "data_offset": 0, 00:12:03.341 "data_size": 65536 00:12:03.341 }, 00:12:03.341 { 
00:12:03.341 "name": "BaseBdev2", 00:12:03.341 "uuid": "1216ac39-f2ea-593a-97f6-50181a751f78", 00:12:03.341 "is_configured": true, 00:12:03.341 "data_offset": 0, 00:12:03.341 "data_size": 65536 00:12:03.341 } 00:12:03.341 ] 00:12:03.341 }' 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=334 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.341 [2024-12-07 16:38:02.120518] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.341 "name": "raid_bdev1", 00:12:03.341 "uuid": "fede7518-77ef-4960-9a4e-6b89a66a437e", 00:12:03.341 "strip_size_kb": 0, 00:12:03.341 "state": "online", 00:12:03.341 "raid_level": "raid1", 00:12:03.341 "superblock": false, 00:12:03.341 "num_base_bdevs": 2, 00:12:03.341 "num_base_bdevs_discovered": 2, 00:12:03.341 "num_base_bdevs_operational": 2, 00:12:03.341 "process": { 00:12:03.341 "type": "rebuild", 00:12:03.341 "target": "spare", 00:12:03.341 "progress": { 00:12:03.341 "blocks": 14336, 00:12:03.341 "percent": 21 00:12:03.341 } 00:12:03.341 }, 00:12:03.341 "base_bdevs_list": [ 00:12:03.341 { 00:12:03.341 "name": "spare", 00:12:03.341 "uuid": "1353958b-f307-5917-b3aa-3a596dcf25d7", 00:12:03.341 "is_configured": true, 00:12:03.341 "data_offset": 0, 00:12:03.341 "data_size": 65536 00:12:03.341 }, 00:12:03.341 { 00:12:03.341 "name": "BaseBdev2", 00:12:03.341 "uuid": "1216ac39-f2ea-593a-97f6-50181a751f78", 00:12:03.341 "is_configured": true, 00:12:03.341 "data_offset": 0, 00:12:03.341 "data_size": 65536 00:12:03.341 } 00:12:03.341 ] 00:12:03.341 }' 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:03.341 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.600 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:03.600 16:38:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:03.600 [2024-12-07 16:38:02.471209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:03.859 151.75 IOPS, 455.25 MiB/s [2024-12-07T16:38:02.758Z] [2024-12-07 16:38:02.580225] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:04.119 [2024-12-07 16:38:02.819102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:04.379 [2024-12-07 16:38:03.255646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:04.379 [2024-12-07 16:38:03.256135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:04.379 16:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:04.379 16:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.379 16:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.379 16:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.379 16:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.379 16:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.379 16:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:12:04.379 16:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.379 16:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.379 16:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.638 16:38:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.638 16:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.638 "name": "raid_bdev1", 00:12:04.638 "uuid": "fede7518-77ef-4960-9a4e-6b89a66a437e", 00:12:04.638 "strip_size_kb": 0, 00:12:04.638 "state": "online", 00:12:04.638 "raid_level": "raid1", 00:12:04.638 "superblock": false, 00:12:04.638 "num_base_bdevs": 2, 00:12:04.638 "num_base_bdevs_discovered": 2, 00:12:04.638 "num_base_bdevs_operational": 2, 00:12:04.638 "process": { 00:12:04.638 "type": "rebuild", 00:12:04.638 "target": "spare", 00:12:04.638 "progress": { 00:12:04.638 "blocks": 32768, 00:12:04.638 "percent": 50 00:12:04.638 } 00:12:04.638 }, 00:12:04.638 "base_bdevs_list": [ 00:12:04.638 { 00:12:04.638 "name": "spare", 00:12:04.638 "uuid": "1353958b-f307-5917-b3aa-3a596dcf25d7", 00:12:04.638 "is_configured": true, 00:12:04.638 "data_offset": 0, 00:12:04.638 "data_size": 65536 00:12:04.638 }, 00:12:04.638 { 00:12:04.638 "name": "BaseBdev2", 00:12:04.638 "uuid": "1216ac39-f2ea-593a-97f6-50181a751f78", 00:12:04.638 "is_configured": true, 00:12:04.638 "data_offset": 0, 00:12:04.638 "data_size": 65536 00:12:04.638 } 00:12:04.638 ] 00:12:04.639 }' 00:12:04.639 16:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.639 16:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:04.639 16:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.639 [2024-12-07 
16:38:03.372553] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:04.639 16:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:04.639 16:38:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:05.208 131.00 IOPS, 393.00 MiB/s [2024-12-07T16:38:04.107Z] [2024-12-07 16:38:04.027863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:05.466 [2024-12-07 16:38:04.347518] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.726 "name": "raid_bdev1", 00:12:05.726 "uuid": "fede7518-77ef-4960-9a4e-6b89a66a437e", 00:12:05.726 "strip_size_kb": 0, 00:12:05.726 "state": "online", 00:12:05.726 "raid_level": "raid1", 00:12:05.726 "superblock": false, 00:12:05.726 "num_base_bdevs": 2, 00:12:05.726 "num_base_bdevs_discovered": 2, 00:12:05.726 "num_base_bdevs_operational": 2, 00:12:05.726 "process": { 00:12:05.726 "type": "rebuild", 00:12:05.726 "target": "spare", 00:12:05.726 "progress": { 00:12:05.726 "blocks": 51200, 00:12:05.726 "percent": 78 00:12:05.726 } 00:12:05.726 }, 00:12:05.726 "base_bdevs_list": [ 00:12:05.726 { 00:12:05.726 "name": "spare", 00:12:05.726 "uuid": "1353958b-f307-5917-b3aa-3a596dcf25d7", 00:12:05.726 "is_configured": true, 00:12:05.726 "data_offset": 0, 00:12:05.726 "data_size": 65536 00:12:05.726 }, 00:12:05.726 { 00:12:05.726 "name": "BaseBdev2", 00:12:05.726 "uuid": "1216ac39-f2ea-593a-97f6-50181a751f78", 00:12:05.726 "is_configured": true, 00:12:05.726 "data_offset": 0, 00:12:05.726 "data_size": 65536 00:12:05.726 } 00:12:05.726 ] 00:12:05.726 }' 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.726 116.17 IOPS, 348.50 MiB/s [2024-12-07T16:38:04.625Z] 16:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:05.726 16:38:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:05.986 [2024-12-07 16:38:04.804595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:06.555 [2024-12-07 16:38:05.245752] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on 
raid_bdev1 00:12:06.555 [2024-12-07 16:38:05.350753] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:06.555 [2024-12-07 16:38:05.353581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.816 104.86 IOPS, 314.57 MiB/s [2024-12-07T16:38:05.715Z] 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.816 "name": "raid_bdev1", 00:12:06.816 "uuid": "fede7518-77ef-4960-9a4e-6b89a66a437e", 00:12:06.816 "strip_size_kb": 0, 00:12:06.816 "state": "online", 00:12:06.816 "raid_level": "raid1", 00:12:06.816 "superblock": false, 00:12:06.816 "num_base_bdevs": 2, 00:12:06.816 "num_base_bdevs_discovered": 2, 00:12:06.816 "num_base_bdevs_operational": 2, 
00:12:06.816 "base_bdevs_list": [ 00:12:06.816 { 00:12:06.816 "name": "spare", 00:12:06.816 "uuid": "1353958b-f307-5917-b3aa-3a596dcf25d7", 00:12:06.816 "is_configured": true, 00:12:06.816 "data_offset": 0, 00:12:06.816 "data_size": 65536 00:12:06.816 }, 00:12:06.816 { 00:12:06.816 "name": "BaseBdev2", 00:12:06.816 "uuid": "1216ac39-f2ea-593a-97f6-50181a751f78", 00:12:06.816 "is_configured": true, 00:12:06.816 "data_offset": 0, 00:12:06.816 "data_size": 65536 00:12:06.816 } 00:12:06.816 ] 00:12:06.816 }' 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.816 16:38:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.075 "name": "raid_bdev1", 00:12:07.075 "uuid": "fede7518-77ef-4960-9a4e-6b89a66a437e", 00:12:07.075 "strip_size_kb": 0, 00:12:07.075 "state": "online", 00:12:07.075 "raid_level": "raid1", 00:12:07.075 "superblock": false, 00:12:07.075 "num_base_bdevs": 2, 00:12:07.075 "num_base_bdevs_discovered": 2, 00:12:07.075 "num_base_bdevs_operational": 2, 00:12:07.075 "base_bdevs_list": [ 00:12:07.075 { 00:12:07.075 "name": "spare", 00:12:07.075 "uuid": "1353958b-f307-5917-b3aa-3a596dcf25d7", 00:12:07.075 "is_configured": true, 00:12:07.075 "data_offset": 0, 00:12:07.075 "data_size": 65536 00:12:07.075 }, 00:12:07.075 { 00:12:07.075 "name": "BaseBdev2", 00:12:07.075 "uuid": "1216ac39-f2ea-593a-97f6-50181a751f78", 00:12:07.075 "is_configured": true, 00:12:07.075 "data_offset": 0, 00:12:07.075 "data_size": 65536 00:12:07.075 } 00:12:07.075 ] 00:12:07.075 }' 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.075 "name": "raid_bdev1", 00:12:07.075 "uuid": "fede7518-77ef-4960-9a4e-6b89a66a437e", 00:12:07.075 "strip_size_kb": 0, 00:12:07.075 "state": "online", 00:12:07.075 "raid_level": "raid1", 00:12:07.075 "superblock": false, 00:12:07.075 "num_base_bdevs": 2, 00:12:07.075 "num_base_bdevs_discovered": 2, 00:12:07.075 "num_base_bdevs_operational": 2, 00:12:07.075 "base_bdevs_list": [ 00:12:07.075 { 00:12:07.075 "name": "spare", 00:12:07.075 "uuid": "1353958b-f307-5917-b3aa-3a596dcf25d7", 00:12:07.075 "is_configured": true, 00:12:07.075 "data_offset": 0, 00:12:07.075 "data_size": 65536 00:12:07.075 }, 00:12:07.075 { 00:12:07.075 "name": "BaseBdev2", 
00:12:07.075 "uuid": "1216ac39-f2ea-593a-97f6-50181a751f78", 00:12:07.075 "is_configured": true, 00:12:07.075 "data_offset": 0, 00:12:07.075 "data_size": 65536 00:12:07.075 } 00:12:07.075 ] 00:12:07.075 }' 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.075 16:38:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.642 [2024-12-07 16:38:06.329373] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.642 [2024-12-07 16:38:06.329423] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.642 00:12:07.642 Latency(us) 00:12:07.642 [2024-12-07T16:38:06.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.642 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:07.642 raid_bdev1 : 7.96 96.08 288.23 0.00 0.00 14239.06 287.97 112641.79 00:12:07.642 [2024-12-07T16:38:06.541Z] =================================================================================================================== 00:12:07.642 [2024-12-07T16:38:06.541Z] Total : 96.08 288.23 0.00 0.00 14239.06 287.97 112641.79 00:12:07.642 [2024-12-07 16:38:06.441830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.642 [2024-12-07 16:38:06.441909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.642 [2024-12-07 16:38:06.442018] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.642 [2024-12-07 16:38:06.442034] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:07.642 { 00:12:07.642 "results": [ 00:12:07.642 { 00:12:07.642 "job": "raid_bdev1", 00:12:07.642 "core_mask": "0x1", 00:12:07.642 "workload": "randrw", 00:12:07.642 "percentage": 50, 00:12:07.642 "status": "finished", 00:12:07.642 "queue_depth": 2, 00:12:07.642 "io_size": 3145728, 00:12:07.642 "runtime": 7.962441, 00:12:07.642 "iops": 96.07606511621248, 00:12:07.642 "mibps": 288.2281953486374, 00:12:07.642 "io_failed": 0, 00:12:07.642 "io_timeout": 0, 00:12:07.642 "avg_latency_us": 14239.061216428347, 00:12:07.642 "min_latency_us": 287.97205240174674, 00:12:07.642 "max_latency_us": 112641.78864628822 00:12:07.642 } 00:12:07.642 ], 00:12:07.642 "core_count": 1 00:12:07.642 } 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:07.642 16:38:06 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:07.642 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:07.905 /dev/nbd0 00:12:07.905 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:07.905 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:07.905 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:07.905 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:07.905 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:07.905 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:07.905 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:07.905 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:07.905 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:07.905 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:07.905 16:38:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.905 1+0 records in 00:12:07.905 1+0 records out 00:12:07.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458875 s, 8.9 MB/s 00:12:07.905 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.905 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:07.905 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.169 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:08.169 16:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:08.169 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.169 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:08.169 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:08.169 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:08.169 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:08.169 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:08.169 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:08.169 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:08.170 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:08.170 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:08.170 16:38:06 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@12 -- # local i 00:12:08.170 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:08.170 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:08.170 16:38:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:08.170 /dev/nbd1 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.428 1+0 records in 00:12:08.428 1+0 records out 00:12:08.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246462 s, 16.6 MB/s 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@886 -- # size=4096 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:08.428 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:08.687 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:08.946 16:38:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87462 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 87462 ']' 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87462 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87462 00:12:08.946 killing process with pid 87462 00:12:08.946 Received shutdown signal, test time was about 9.281801 seconds 00:12:08.946 00:12:08.946 Latency(us) 00:12:08.946 [2024-12-07T16:38:07.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.946 [2024-12-07T16:38:07.845Z] =================================================================================================================== 00:12:08.946 [2024-12-07T16:38:07.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87462' 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87462 00:12:08.946 [2024-12-07 16:38:07.754113] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:08.946 16:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 87462 00:12:08.946 [2024-12-07 16:38:07.803397] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:12:09.515 16:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:09.515 00:12:09.515 real 0m11.394s 00:12:09.515 user 0m14.676s 00:12:09.515 sys 0m1.584s 00:12:09.515 16:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.515 16:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.515 ************************************ 00:12:09.515 END TEST raid_rebuild_test_io 00:12:09.515 ************************************ 00:12:09.515 16:38:08 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:09.515 16:38:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:09.515 16:38:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.515 16:38:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.515 ************************************ 00:12:09.515 START TEST raid_rebuild_test_sb_io 00:12:09.515 ************************************ 00:12:09.515 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:12:09.515 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:09.515 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:09.515 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:09.515 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:09.515 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:09.515 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.516 16:38:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87827 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87827 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87827 ']' 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:09.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:09.516 16:38:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.516 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:09.516 Zero copy mechanism will not be used. 00:12:09.516 [2024-12-07 16:38:08.357961] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:09.516 [2024-12-07 16:38:08.358098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87827 ] 00:12:09.775 [2024-12-07 16:38:08.523830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.775 [2024-12-07 16:38:08.601490] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.034 [2024-12-07 16:38:08.680727] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.034 [2024-12-07 16:38:08.680776] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.615 BaseBdev1_malloc 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.615 [2024-12-07 16:38:09.257579] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:10.615 [2024-12-07 16:38:09.257669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.615 [2024-12-07 16:38:09.257701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:10.615 [2024-12-07 16:38:09.257721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.615 [2024-12-07 16:38:09.260186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.615 [2024-12-07 16:38:09.260227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:10.615 BaseBdev1 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.615 BaseBdev2_malloc 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:10.615 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.616 [2024-12-07 16:38:09.301854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:10.616 [2024-12-07 16:38:09.301926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:10.616 [2024-12-07 16:38:09.301951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:10.616 [2024-12-07 16:38:09.301961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.616 [2024-12-07 16:38:09.304680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.616 [2024-12-07 16:38:09.304718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:10.616 BaseBdev2 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.616 spare_malloc 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.616 spare_delay 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.616 
[2024-12-07 16:38:09.348863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:10.616 [2024-12-07 16:38:09.348936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.616 [2024-12-07 16:38:09.348960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:10.616 [2024-12-07 16:38:09.348969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.616 [2024-12-07 16:38:09.351395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.616 [2024-12-07 16:38:09.351426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:10.616 spare 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.616 [2024-12-07 16:38:09.360894] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.616 [2024-12-07 16:38:09.362992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.616 [2024-12-07 16:38:09.363164] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:10.616 [2024-12-07 16:38:09.363192] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:10.616 [2024-12-07 16:38:09.363475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:10.616 [2024-12-07 16:38:09.363618] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:10.616 [2024-12-07 
16:38:09.363667] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:10.616 [2024-12-07 16:38:09.363817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.616 "name": "raid_bdev1", 00:12:10.616 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:10.616 "strip_size_kb": 0, 00:12:10.616 "state": "online", 00:12:10.616 "raid_level": "raid1", 00:12:10.616 "superblock": true, 00:12:10.616 "num_base_bdevs": 2, 00:12:10.616 "num_base_bdevs_discovered": 2, 00:12:10.616 "num_base_bdevs_operational": 2, 00:12:10.616 "base_bdevs_list": [ 00:12:10.616 { 00:12:10.616 "name": "BaseBdev1", 00:12:10.616 "uuid": "89e9ece9-4a1a-5665-8d2f-ffdc18eee7d5", 00:12:10.616 "is_configured": true, 00:12:10.616 "data_offset": 2048, 00:12:10.616 "data_size": 63488 00:12:10.616 }, 00:12:10.616 { 00:12:10.616 "name": "BaseBdev2", 00:12:10.616 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:10.616 "is_configured": true, 00:12:10.616 "data_offset": 2048, 00:12:10.616 "data_size": 63488 00:12:10.616 } 00:12:10.616 ] 00:12:10.616 }' 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.616 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.183 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:11.183 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:11.183 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.183 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.183 [2024-12-07 16:38:09.832410] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.183 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.183 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:12:11.183 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.183 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.183 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.183 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:11.183 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.183 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.184 [2024-12-07 16:38:09.927913] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.184 "name": "raid_bdev1", 00:12:11.184 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:11.184 "strip_size_kb": 0, 00:12:11.184 "state": "online", 00:12:11.184 "raid_level": "raid1", 00:12:11.184 "superblock": true, 00:12:11.184 "num_base_bdevs": 2, 00:12:11.184 "num_base_bdevs_discovered": 1, 00:12:11.184 "num_base_bdevs_operational": 1, 00:12:11.184 "base_bdevs_list": [ 00:12:11.184 { 00:12:11.184 "name": null, 00:12:11.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.184 "is_configured": false, 00:12:11.184 "data_offset": 0, 00:12:11.184 "data_size": 63488 00:12:11.184 }, 00:12:11.184 { 00:12:11.184 "name": "BaseBdev2", 00:12:11.184 "uuid": 
"5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:11.184 "is_configured": true, 00:12:11.184 "data_offset": 2048, 00:12:11.184 "data_size": 63488 00:12:11.184 } 00:12:11.184 ] 00:12:11.184 }' 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.184 16:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.184 [2024-12-07 16:38:10.019324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:11.184 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:11.184 Zero copy mechanism will not be used. 00:12:11.184 Running I/O for 60 seconds... 00:12:11.751 16:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:11.751 16:38:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.751 16:38:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.751 [2024-12-07 16:38:10.377777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:11.751 16:38:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.751 16:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:11.751 [2024-12-07 16:38:10.413852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:11.751 [2024-12-07 16:38:10.416135] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:11.751 [2024-12-07 16:38:10.524553] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:11.751 [2024-12-07 16:38:10.525249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:12.010 [2024-12-07 16:38:10.734858] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:12.010 [2024-12-07 16:38:10.735359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:12.268 213.00 IOPS, 639.00 MiB/s [2024-12-07T16:38:11.167Z] [2024-12-07 16:38:11.057797] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:12.268 [2024-12-07 16:38:11.058225] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:12.527 [2024-12-07 16:38:11.285096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:12.527 [2024-12-07 16:38:11.285582] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:12.527 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.527 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.527 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.527 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.527 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.527 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.527 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.527 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.527 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:12.784 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.784 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.784 "name": "raid_bdev1", 00:12:12.784 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:12.784 "strip_size_kb": 0, 00:12:12.784 "state": "online", 00:12:12.784 "raid_level": "raid1", 00:12:12.784 "superblock": true, 00:12:12.784 "num_base_bdevs": 2, 00:12:12.784 "num_base_bdevs_discovered": 2, 00:12:12.784 "num_base_bdevs_operational": 2, 00:12:12.784 "process": { 00:12:12.784 "type": "rebuild", 00:12:12.784 "target": "spare", 00:12:12.784 "progress": { 00:12:12.784 "blocks": 10240, 00:12:12.784 "percent": 16 00:12:12.784 } 00:12:12.784 }, 00:12:12.784 "base_bdevs_list": [ 00:12:12.784 { 00:12:12.784 "name": "spare", 00:12:12.784 "uuid": "a8c708fd-8d8f-56aa-9426-bc4fae97d1eb", 00:12:12.784 "is_configured": true, 00:12:12.784 "data_offset": 2048, 00:12:12.784 "data_size": 63488 00:12:12.784 }, 00:12:12.784 { 00:12:12.784 "name": "BaseBdev2", 00:12:12.784 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:12.784 "is_configured": true, 00:12:12.784 "data_offset": 2048, 00:12:12.784 "data_size": 63488 00:12:12.784 } 00:12:12.784 ] 00:12:12.784 }' 00:12:12.784 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.784 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.784 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.784 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.784 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:12.784 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:12.784 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.784 [2024-12-07 16:38:11.570172] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:12.784 [2024-12-07 16:38:11.619823] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:12.784 [2024-12-07 16:38:11.633747] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:12.785 [2024-12-07 16:38:11.642052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.785 [2024-12-07 16:38:11.642089] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:12.785 [2024-12-07 16:38:11.642105] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:12.785 [2024-12-07 16:38:11.663197] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:12:12.785 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.785 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:12.785 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.785 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.785 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.785 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.785 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:12.785 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.785 16:38:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.785 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.785 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.042 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.042 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.042 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.042 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.042 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.042 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.042 "name": "raid_bdev1", 00:12:13.042 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:13.042 "strip_size_kb": 0, 00:12:13.042 "state": "online", 00:12:13.042 "raid_level": "raid1", 00:12:13.042 "superblock": true, 00:12:13.042 "num_base_bdevs": 2, 00:12:13.042 "num_base_bdevs_discovered": 1, 00:12:13.042 "num_base_bdevs_operational": 1, 00:12:13.042 "base_bdevs_list": [ 00:12:13.042 { 00:12:13.042 "name": null, 00:12:13.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.042 "is_configured": false, 00:12:13.042 "data_offset": 0, 00:12:13.042 "data_size": 63488 00:12:13.042 }, 00:12:13.042 { 00:12:13.042 "name": "BaseBdev2", 00:12:13.042 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:13.042 "is_configured": true, 00:12:13.042 "data_offset": 2048, 00:12:13.042 "data_size": 63488 00:12:13.042 } 00:12:13.042 ] 00:12:13.042 }' 00:12:13.042 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.042 16:38:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.302 188.00 IOPS, 564.00 MiB/s [2024-12-07T16:38:12.201Z] 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:13.302 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.302 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:13.302 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:13.302 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.302 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.302 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.302 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.302 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.302 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.302 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.302 "name": "raid_bdev1", 00:12:13.302 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:13.302 "strip_size_kb": 0, 00:12:13.302 "state": "online", 00:12:13.302 "raid_level": "raid1", 00:12:13.302 "superblock": true, 00:12:13.302 "num_base_bdevs": 2, 00:12:13.302 "num_base_bdevs_discovered": 1, 00:12:13.302 "num_base_bdevs_operational": 1, 00:12:13.302 "base_bdevs_list": [ 00:12:13.302 { 00:12:13.302 "name": null, 00:12:13.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.302 "is_configured": false, 00:12:13.302 "data_offset": 0, 00:12:13.302 "data_size": 63488 00:12:13.302 }, 00:12:13.302 { 00:12:13.302 "name": "BaseBdev2", 
00:12:13.302 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:13.302 "is_configured": true, 00:12:13.302 "data_offset": 2048, 00:12:13.302 "data_size": 63488 00:12:13.302 } 00:12:13.302 ] 00:12:13.302 }' 00:12:13.302 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.560 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:13.560 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.560 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:13.560 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:13.560 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.560 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.560 [2024-12-07 16:38:12.262834] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:13.560 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.561 16:38:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:13.561 [2024-12-07 16:38:12.316725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:13.561 [2024-12-07 16:38:12.319008] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:13.561 [2024-12-07 16:38:12.442869] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:13.561 [2024-12-07 16:38:12.443599] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:13.819 [2024-12-07 16:38:12.653500] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:13.819 [2024-12-07 16:38:12.653744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:14.387 [2024-12-07 16:38:12.995997] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:14.387 [2024-12-07 16:38:12.996766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:14.387 179.33 IOPS, 538.00 MiB/s [2024-12-07T16:38:13.286Z] [2024-12-07 16:38:13.224249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:14.387 [2024-12-07 16:38:13.224689] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.646 16:38:13 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.646 "name": "raid_bdev1", 00:12:14.646 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:14.646 "strip_size_kb": 0, 00:12:14.646 "state": "online", 00:12:14.646 "raid_level": "raid1", 00:12:14.646 "superblock": true, 00:12:14.646 "num_base_bdevs": 2, 00:12:14.646 "num_base_bdevs_discovered": 2, 00:12:14.646 "num_base_bdevs_operational": 2, 00:12:14.646 "process": { 00:12:14.646 "type": "rebuild", 00:12:14.646 "target": "spare", 00:12:14.646 "progress": { 00:12:14.646 "blocks": 10240, 00:12:14.646 "percent": 16 00:12:14.646 } 00:12:14.646 }, 00:12:14.646 "base_bdevs_list": [ 00:12:14.646 { 00:12:14.646 "name": "spare", 00:12:14.646 "uuid": "a8c708fd-8d8f-56aa-9426-bc4fae97d1eb", 00:12:14.646 "is_configured": true, 00:12:14.646 "data_offset": 2048, 00:12:14.646 "data_size": 63488 00:12:14.646 }, 00:12:14.646 { 00:12:14.646 "name": "BaseBdev2", 00:12:14.646 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:14.646 "is_configured": true, 00:12:14.646 "data_offset": 2048, 00:12:14.646 "data_size": 63488 00:12:14.646 } 00:12:14.646 ] 00:12:14.646 }' 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:14.646 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=345 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.646 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.647 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.647 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.647 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.647 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.647 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.647 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.647 "name": "raid_bdev1", 00:12:14.647 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:14.647 
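The `[: =: unary operator expected` message recorded above is the classic unquoted-empty-variable failure in `test`/`[`: when the tested variable expands to nothing, `'[' = false ']'` leaves `[` with too few operands. A minimal sketch of the failure mode and the usual fixes (the `flag` variable here is hypothetical, not taken from `bdev_raid.sh`):

```shell
#!/usr/bin/env bash
# Reproduces the shape of the log's error: an empty, unquoted variable
# makes `[ $flag = false ]` expand to `[ = false ]`, which is malformed.
flag=""

# Safe form 1: quote the expansion so `[` always sees three operands.
if [ "$flag" = false ]; then r1="false"; else r1="not-false"; fi

# Safe form 2: use [[ ]], which does not word-split its operands.
if [[ $flag = false ]]; then r2="false"; else r2="not-false"; fi

echo "$r1 $r2"
```

Either form avoids the unary-operator error; the unquoted `[ $flag = false ]` variant only works when `$flag` is guaranteed non-empty.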
"strip_size_kb": 0, 00:12:14.647 "state": "online", 00:12:14.647 "raid_level": "raid1", 00:12:14.647 "superblock": true, 00:12:14.647 "num_base_bdevs": 2, 00:12:14.647 "num_base_bdevs_discovered": 2, 00:12:14.647 "num_base_bdevs_operational": 2, 00:12:14.647 "process": { 00:12:14.647 "type": "rebuild", 00:12:14.647 "target": "spare", 00:12:14.647 "progress": { 00:12:14.647 "blocks": 12288, 00:12:14.647 "percent": 19 00:12:14.647 } 00:12:14.647 }, 00:12:14.647 "base_bdevs_list": [ 00:12:14.647 { 00:12:14.647 "name": "spare", 00:12:14.647 "uuid": "a8c708fd-8d8f-56aa-9426-bc4fae97d1eb", 00:12:14.647 "is_configured": true, 00:12:14.647 "data_offset": 2048, 00:12:14.647 "data_size": 63488 00:12:14.647 }, 00:12:14.647 { 00:12:14.647 "name": "BaseBdev2", 00:12:14.647 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:14.647 "is_configured": true, 00:12:14.647 "data_offset": 2048, 00:12:14.647 "data_size": 63488 00:12:14.647 } 00:12:14.647 ] 00:12:14.647 }' 00:12:14.647 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.647 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.647 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.906 [2024-12-07 16:38:13.559944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:14.906 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.906 16:38:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:14.906 [2024-12-07 16:38:13.675294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:15.164 [2024-12-07 16:38:13.923705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 
18432 offset_end: 24576 00:12:15.423 151.50 IOPS, 454.50 MiB/s [2024-12-07T16:38:14.322Z] [2024-12-07 16:38:14.127344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:15.423 [2024-12-07 16:38:14.127612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:15.682 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:15.682 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.682 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.941 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.941 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.941 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.941 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.941 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.941 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.941 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.941 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.941 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.941 "name": "raid_bdev1", 00:12:15.941 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:15.941 "strip_size_kb": 0, 00:12:15.941 "state": "online", 00:12:15.941 "raid_level": "raid1", 00:12:15.941 "superblock": true, 
00:12:15.941 "num_base_bdevs": 2, 00:12:15.941 "num_base_bdevs_discovered": 2, 00:12:15.941 "num_base_bdevs_operational": 2, 00:12:15.941 "process": { 00:12:15.941 "type": "rebuild", 00:12:15.941 "target": "spare", 00:12:15.941 "progress": { 00:12:15.941 "blocks": 30720, 00:12:15.941 "percent": 48 00:12:15.941 } 00:12:15.941 }, 00:12:15.941 "base_bdevs_list": [ 00:12:15.941 { 00:12:15.941 "name": "spare", 00:12:15.941 "uuid": "a8c708fd-8d8f-56aa-9426-bc4fae97d1eb", 00:12:15.941 "is_configured": true, 00:12:15.941 "data_offset": 2048, 00:12:15.941 "data_size": 63488 00:12:15.941 }, 00:12:15.941 { 00:12:15.941 "name": "BaseBdev2", 00:12:15.941 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:15.941 "is_configured": true, 00:12:15.941 "data_offset": 2048, 00:12:15.941 "data_size": 63488 00:12:15.941 } 00:12:15.941 ] 00:12:15.941 }' 00:12:15.941 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.941 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.941 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.941 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.941 16:38:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:16.460 132.20 IOPS, 396.60 MiB/s [2024-12-07T16:38:15.359Z] [2024-12-07 16:38:15.318145] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:16.725 [2024-12-07 16:38:15.540069] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.003 "name": "raid_bdev1", 00:12:17.003 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:17.003 "strip_size_kb": 0, 00:12:17.003 "state": "online", 00:12:17.003 "raid_level": "raid1", 00:12:17.003 "superblock": true, 00:12:17.003 "num_base_bdevs": 2, 00:12:17.003 "num_base_bdevs_discovered": 2, 00:12:17.003 "num_base_bdevs_operational": 2, 00:12:17.003 "process": { 00:12:17.003 "type": "rebuild", 00:12:17.003 "target": "spare", 00:12:17.003 "progress": { 00:12:17.003 "blocks": 47104, 00:12:17.003 "percent": 74 00:12:17.003 } 00:12:17.003 }, 00:12:17.003 "base_bdevs_list": [ 00:12:17.003 { 00:12:17.003 "name": "spare", 00:12:17.003 "uuid": "a8c708fd-8d8f-56aa-9426-bc4fae97d1eb", 00:12:17.003 "is_configured": true, 00:12:17.003 "data_offset": 2048, 00:12:17.003 "data_size": 63488 00:12:17.003 }, 00:12:17.003 { 
00:12:17.003 "name": "BaseBdev2", 00:12:17.003 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:17.003 "is_configured": true, 00:12:17.003 "data_offset": 2048, 00:12:17.003 "data_size": 63488 00:12:17.003 } 00:12:17.003 ] 00:12:17.003 }' 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.003 [2024-12-07 16:38:15.863873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:17.003 16:38:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:17.263 115.17 IOPS, 345.50 MiB/s [2024-12-07T16:38:16.162Z] [2024-12-07 16:38:16.070842] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:17.263 [2024-12-07 16:38:16.071308] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:17.835 [2024-12-07 16:38:16.716030] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:18.095 [2024-12-07 16:38:16.821356] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:18.095 [2024-12-07 16:38:16.824761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.095 16:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:18.095 16:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.095 16:38:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.095 16:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:18.095 16:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.095 16:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.095 16:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.095 16:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.095 16:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.095 16:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.095 16:38:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.095 16:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.095 "name": "raid_bdev1", 00:12:18.095 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:18.095 "strip_size_kb": 0, 00:12:18.095 "state": "online", 00:12:18.095 "raid_level": "raid1", 00:12:18.095 "superblock": true, 00:12:18.095 "num_base_bdevs": 2, 00:12:18.095 "num_base_bdevs_discovered": 2, 00:12:18.095 "num_base_bdevs_operational": 2, 00:12:18.095 "base_bdevs_list": [ 00:12:18.095 { 00:12:18.095 "name": "spare", 00:12:18.095 "uuid": "a8c708fd-8d8f-56aa-9426-bc4fae97d1eb", 00:12:18.095 "is_configured": true, 00:12:18.095 "data_offset": 2048, 00:12:18.095 "data_size": 63488 00:12:18.095 }, 00:12:18.095 { 00:12:18.095 "name": "BaseBdev2", 00:12:18.095 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:18.095 "is_configured": true, 00:12:18.095 "data_offset": 2048, 00:12:18.095 "data_size": 63488 00:12:18.095 } 00:12:18.095 ] 00:12:18.095 }' 00:12:18.095 16:38:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.095 16:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:18.356 16:38:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:18.356 102.71 IOPS, 308.14 MiB/s [2024-12-07T16:38:17.255Z] 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.356 "name": "raid_bdev1", 00:12:18.356 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:18.356 "strip_size_kb": 0, 00:12:18.356 "state": "online", 00:12:18.356 
"raid_level": "raid1", 00:12:18.356 "superblock": true, 00:12:18.356 "num_base_bdevs": 2, 00:12:18.356 "num_base_bdevs_discovered": 2, 00:12:18.356 "num_base_bdevs_operational": 2, 00:12:18.356 "base_bdevs_list": [ 00:12:18.356 { 00:12:18.356 "name": "spare", 00:12:18.356 "uuid": "a8c708fd-8d8f-56aa-9426-bc4fae97d1eb", 00:12:18.356 "is_configured": true, 00:12:18.356 "data_offset": 2048, 00:12:18.356 "data_size": 63488 00:12:18.356 }, 00:12:18.356 { 00:12:18.356 "name": "BaseBdev2", 00:12:18.356 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:18.356 "is_configured": true, 00:12:18.356 "data_offset": 2048, 00:12:18.356 "data_size": 63488 00:12:18.356 } 00:12:18.356 ] 00:12:18.356 }' 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.356 16:38:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.356 "name": "raid_bdev1", 00:12:18.356 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:18.356 "strip_size_kb": 0, 00:12:18.356 "state": "online", 00:12:18.356 "raid_level": "raid1", 00:12:18.356 "superblock": true, 00:12:18.356 "num_base_bdevs": 2, 00:12:18.356 "num_base_bdevs_discovered": 2, 00:12:18.356 "num_base_bdevs_operational": 2, 00:12:18.356 "base_bdevs_list": [ 00:12:18.356 { 00:12:18.356 "name": "spare", 00:12:18.356 "uuid": "a8c708fd-8d8f-56aa-9426-bc4fae97d1eb", 00:12:18.356 "is_configured": true, 00:12:18.356 "data_offset": 2048, 00:12:18.356 "data_size": 63488 00:12:18.356 }, 00:12:18.356 { 00:12:18.356 "name": "BaseBdev2", 00:12:18.356 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:18.356 "is_configured": true, 00:12:18.356 "data_offset": 2048, 00:12:18.356 "data_size": 63488 00:12:18.356 } 00:12:18.356 ] 00:12:18.356 }' 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.356 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.926 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:18.926 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.926 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.926 [2024-12-07 16:38:17.651181] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.926 [2024-12-07 16:38:17.651242] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.926 00:12:18.926 Latency(us) 00:12:18.926 [2024-12-07T16:38:17.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.926 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:18.926 raid_bdev1 : 7.69 97.11 291.34 0.00 0.00 12855.19 273.66 109894.43 00:12:18.926 [2024-12-07T16:38:17.825Z] =================================================================================================================== 00:12:18.926 [2024-12-07T16:38:17.825Z] Total : 97.11 291.34 0.00 0.00 12855.19 273.66 109894.43 00:12:18.926 [2024-12-07 16:38:17.703598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.926 [2024-12-07 16:38:17.703652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.926 [2024-12-07 16:38:17.703757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.927 [2024-12-07 16:38:17.703771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:18.927 { 00:12:18.927 "results": [ 00:12:18.927 { 00:12:18.927 "job": "raid_bdev1", 00:12:18.927 "core_mask": "0x1", 00:12:18.927 "workload": "randrw", 00:12:18.927 "percentage": 50, 00:12:18.927 "status": "finished", 00:12:18.927 "queue_depth": 
2, 00:12:18.927 "io_size": 3145728, 00:12:18.927 "runtime": 7.691989, 00:12:18.927 "iops": 97.11402343399087, 00:12:18.927 "mibps": 291.3420703019726, 00:12:18.927 "io_failed": 0, 00:12:18.927 "io_timeout": 0, 00:12:18.927 "avg_latency_us": 12855.190034081012, 00:12:18.927 "min_latency_us": 273.6628820960699, 00:12:18.927 "max_latency_us": 109894.42794759825 00:12:18.927 } 00:12:18.927 ], 00:12:18.927 "core_count": 1 00:12:18.927 } 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:18.927 16:38:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:18.927 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:19.188 /dev/nbd0 00:12:19.188 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.188 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:19.188 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:19.188 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:19.188 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:19.188 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:19.188 16:38:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.188 1+0 records in 00:12:19.188 1+0 records out 00:12:19.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250644 s, 16.3 MB/s 00:12:19.188 16:38:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:12:19.188 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:19.449 /dev/nbd1 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.449 1+0 records in 00:12:19.449 1+0 records out 00:12:19.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529277 s, 7.7 MB/s 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.449 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:19.708 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:19.708 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.708 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:19.708 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:19.708 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:19.708 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.708 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:19.708 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:19.708 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:19.708 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:19.708 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.708 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.708 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.968 [2024-12-07 16:38:18.857725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:19.968 [2024-12-07 16:38:18.857785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.968 [2024-12-07 16:38:18.857810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:19.968 [2024-12-07 16:38:18.857820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.968 [2024-12-07 16:38:18.860463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.968 [2024-12-07 16:38:18.860496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:19.968 [2024-12-07 16:38:18.860601] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:19.968 [2024-12-07 16:38:18.860681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:19.968 [2024-12-07 16:38:18.860824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.968 spare 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.968 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.227 [2024-12-07 16:38:18.960740] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:20.227 [2024-12-07 16:38:18.960795] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:20.227 [2024-12-07 16:38:18.961183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:12:20.227 [2024-12-07 16:38:18.961392] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:20.227 [2024-12-07 16:38:18.961410] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:20.227 [2024-12-07 16:38:18.961626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:20.227 16:38:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.227 16:38:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.227 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.227 "name": "raid_bdev1", 00:12:20.227 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:20.227 "strip_size_kb": 0, 00:12:20.227 "state": "online", 00:12:20.227 "raid_level": "raid1", 00:12:20.227 "superblock": true, 00:12:20.227 "num_base_bdevs": 2, 00:12:20.227 "num_base_bdevs_discovered": 2, 00:12:20.227 "num_base_bdevs_operational": 2, 00:12:20.227 "base_bdevs_list": [ 00:12:20.227 { 00:12:20.227 "name": "spare", 00:12:20.227 "uuid": "a8c708fd-8d8f-56aa-9426-bc4fae97d1eb", 00:12:20.227 "is_configured": true, 00:12:20.227 "data_offset": 2048, 00:12:20.227 "data_size": 63488 00:12:20.227 }, 00:12:20.227 { 00:12:20.227 "name": "BaseBdev2", 00:12:20.227 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:20.227 "is_configured": true, 00:12:20.227 "data_offset": 2048, 00:12:20.227 "data_size": 63488 00:12:20.227 } 00:12:20.227 ] 00:12:20.227 }' 00:12:20.227 16:38:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.227 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.797 "name": "raid_bdev1", 00:12:20.797 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:20.797 "strip_size_kb": 0, 00:12:20.797 "state": "online", 00:12:20.797 "raid_level": "raid1", 00:12:20.797 "superblock": true, 00:12:20.797 "num_base_bdevs": 2, 00:12:20.797 "num_base_bdevs_discovered": 2, 00:12:20.797 "num_base_bdevs_operational": 2, 00:12:20.797 "base_bdevs_list": [ 00:12:20.797 { 00:12:20.797 "name": "spare", 00:12:20.797 "uuid": "a8c708fd-8d8f-56aa-9426-bc4fae97d1eb", 00:12:20.797 "is_configured": true, 00:12:20.797 "data_offset": 2048, 00:12:20.797 
"data_size": 63488 00:12:20.797 }, 00:12:20.797 { 00:12:20.797 "name": "BaseBdev2", 00:12:20.797 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:20.797 "is_configured": true, 00:12:20.797 "data_offset": 2048, 00:12:20.797 "data_size": 63488 00:12:20.797 } 00:12:20.797 ] 00:12:20.797 }' 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.797 [2024-12-07 16:38:19.592718] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.797 16:38:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.797 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.798 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.798 "name": "raid_bdev1", 00:12:20.798 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:20.798 "strip_size_kb": 0, 00:12:20.798 "state": "online", 00:12:20.798 "raid_level": "raid1", 00:12:20.798 
"superblock": true, 00:12:20.798 "num_base_bdevs": 2, 00:12:20.798 "num_base_bdevs_discovered": 1, 00:12:20.798 "num_base_bdevs_operational": 1, 00:12:20.798 "base_bdevs_list": [ 00:12:20.798 { 00:12:20.798 "name": null, 00:12:20.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.798 "is_configured": false, 00:12:20.798 "data_offset": 0, 00:12:20.798 "data_size": 63488 00:12:20.798 }, 00:12:20.798 { 00:12:20.798 "name": "BaseBdev2", 00:12:20.798 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:20.798 "is_configured": true, 00:12:20.798 "data_offset": 2048, 00:12:20.798 "data_size": 63488 00:12:20.798 } 00:12:20.798 ] 00:12:20.798 }' 00:12:20.798 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.798 16:38:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.368 16:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:21.368 16:38:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.368 16:38:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.368 [2024-12-07 16:38:20.075968] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:21.368 [2024-12-07 16:38:20.076221] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:21.368 [2024-12-07 16:38:20.076240] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:21.368 [2024-12-07 16:38:20.076279] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:21.368 [2024-12-07 16:38:20.084465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:12:21.368 16:38:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.368 16:38:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:21.368 [2024-12-07 16:38:20.086762] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:22.309 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.309 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.309 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.309 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.309 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.309 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.309 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.309 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.309 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.309 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.309 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.309 "name": "raid_bdev1", 00:12:22.309 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:22.309 "strip_size_kb": 0, 00:12:22.309 "state": "online", 
00:12:22.309 "raid_level": "raid1", 00:12:22.309 "superblock": true, 00:12:22.309 "num_base_bdevs": 2, 00:12:22.309 "num_base_bdevs_discovered": 2, 00:12:22.309 "num_base_bdevs_operational": 2, 00:12:22.309 "process": { 00:12:22.309 "type": "rebuild", 00:12:22.309 "target": "spare", 00:12:22.309 "progress": { 00:12:22.309 "blocks": 20480, 00:12:22.309 "percent": 32 00:12:22.309 } 00:12:22.309 }, 00:12:22.309 "base_bdevs_list": [ 00:12:22.309 { 00:12:22.309 "name": "spare", 00:12:22.309 "uuid": "a8c708fd-8d8f-56aa-9426-bc4fae97d1eb", 00:12:22.309 "is_configured": true, 00:12:22.309 "data_offset": 2048, 00:12:22.309 "data_size": 63488 00:12:22.309 }, 00:12:22.309 { 00:12:22.309 "name": "BaseBdev2", 00:12:22.309 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:22.309 "is_configured": true, 00:12:22.309 "data_offset": 2048, 00:12:22.309 "data_size": 63488 00:12:22.309 } 00:12:22.309 ] 00:12:22.309 }' 00:12:22.309 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.309 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:22.309 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.569 [2024-12-07 16:38:21.251914] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:22.569 [2024-12-07 16:38:21.295735] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:22.569 [2024-12-07 
16:38:21.295808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.569 [2024-12-07 16:38:21.295825] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:22.569 [2024-12-07 16:38:21.295836] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.569 "name": "raid_bdev1", 00:12:22.569 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:22.569 "strip_size_kb": 0, 00:12:22.569 "state": "online", 00:12:22.569 "raid_level": "raid1", 00:12:22.569 "superblock": true, 00:12:22.569 "num_base_bdevs": 2, 00:12:22.569 "num_base_bdevs_discovered": 1, 00:12:22.569 "num_base_bdevs_operational": 1, 00:12:22.569 "base_bdevs_list": [ 00:12:22.569 { 00:12:22.569 "name": null, 00:12:22.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.569 "is_configured": false, 00:12:22.569 "data_offset": 0, 00:12:22.569 "data_size": 63488 00:12:22.569 }, 00:12:22.569 { 00:12:22.569 "name": "BaseBdev2", 00:12:22.569 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:22.569 "is_configured": true, 00:12:22.569 "data_offset": 2048, 00:12:22.569 "data_size": 63488 00:12:22.569 } 00:12:22.569 ] 00:12:22.569 }' 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.569 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.139 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:23.139 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.139 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.139 [2024-12-07 16:38:21.739508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:23.139 [2024-12-07 16:38:21.739605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.139 [2024-12-07 16:38:21.739631] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:12:23.139 [2024-12-07 16:38:21.739644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.139 [2024-12-07 16:38:21.740179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.139 [2024-12-07 16:38:21.740201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:23.139 [2024-12-07 16:38:21.740310] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:23.139 [2024-12-07 16:38:21.740327] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:23.139 [2024-12-07 16:38:21.740338] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:23.139 [2024-12-07 16:38:21.740393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:23.139 [2024-12-07 16:38:21.748499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:12:23.139 spare 00:12:23.139 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.139 16:38:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:23.139 [2024-12-07 16:38:21.750722] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.077 "name": "raid_bdev1", 00:12:24.077 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:24.077 "strip_size_kb": 0, 00:12:24.077 "state": "online", 00:12:24.077 "raid_level": "raid1", 00:12:24.077 "superblock": true, 00:12:24.077 "num_base_bdevs": 2, 00:12:24.077 "num_base_bdevs_discovered": 2, 00:12:24.077 "num_base_bdevs_operational": 2, 00:12:24.077 "process": { 00:12:24.077 "type": "rebuild", 00:12:24.077 "target": "spare", 00:12:24.077 "progress": { 00:12:24.077 "blocks": 20480, 00:12:24.077 "percent": 32 00:12:24.077 } 00:12:24.077 }, 00:12:24.077 "base_bdevs_list": [ 00:12:24.077 { 00:12:24.077 "name": "spare", 00:12:24.077 "uuid": "a8c708fd-8d8f-56aa-9426-bc4fae97d1eb", 00:12:24.077 "is_configured": true, 00:12:24.077 "data_offset": 2048, 00:12:24.077 "data_size": 63488 00:12:24.077 }, 00:12:24.077 { 00:12:24.077 "name": "BaseBdev2", 00:12:24.077 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:24.077 "is_configured": true, 00:12:24.077 "data_offset": 2048, 00:12:24.077 "data_size": 63488 00:12:24.077 } 00:12:24.077 ] 00:12:24.077 }' 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.077 [2024-12-07 16:38:22.911074] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:24.077 [2024-12-07 16:38:22.959766] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:24.077 [2024-12-07 16:38:22.959823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.077 [2024-12-07 16:38:22.959842] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:24.077 [2024-12-07 16:38:22.959850] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:24.077 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.336 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.336 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.336 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.336 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:24.336 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.336 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.336 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.336 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.336 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.336 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.336 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.336 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.336 16:38:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.336 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.336 "name": "raid_bdev1", 00:12:24.336 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:24.336 "strip_size_kb": 0, 00:12:24.336 "state": "online", 00:12:24.336 "raid_level": "raid1", 00:12:24.336 "superblock": true, 00:12:24.336 "num_base_bdevs": 2, 00:12:24.336 "num_base_bdevs_discovered": 1, 00:12:24.336 "num_base_bdevs_operational": 1, 00:12:24.336 "base_bdevs_list": [ 00:12:24.336 { 00:12:24.336 "name": null, 00:12:24.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.336 "is_configured": false, 00:12:24.336 "data_offset": 0, 00:12:24.336 "data_size": 63488 00:12:24.336 }, 00:12:24.336 { 00:12:24.336 "name": "BaseBdev2", 00:12:24.336 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:24.336 "is_configured": true, 00:12:24.336 "data_offset": 2048, 00:12:24.336 "data_size": 63488 00:12:24.336 } 00:12:24.336 ] 00:12:24.336 }' 
00:12:24.336 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.336 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.595 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:24.595 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.595 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:24.595 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:24.595 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.595 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.595 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.595 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.595 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.595 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.595 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.595 "name": "raid_bdev1", 00:12:24.595 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:24.595 "strip_size_kb": 0, 00:12:24.595 "state": "online", 00:12:24.595 "raid_level": "raid1", 00:12:24.595 "superblock": true, 00:12:24.595 "num_base_bdevs": 2, 00:12:24.595 "num_base_bdevs_discovered": 1, 00:12:24.595 "num_base_bdevs_operational": 1, 00:12:24.595 "base_bdevs_list": [ 00:12:24.595 { 00:12:24.595 "name": null, 00:12:24.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.595 "is_configured": false, 00:12:24.595 "data_offset": 0, 
00:12:24.595 "data_size": 63488 00:12:24.595 }, 00:12:24.595 { 00:12:24.595 "name": "BaseBdev2", 00:12:24.595 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:24.595 "is_configured": true, 00:12:24.595 "data_offset": 2048, 00:12:24.595 "data_size": 63488 00:12:24.595 } 00:12:24.595 ] 00:12:24.595 }' 00:12:24.595 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.855 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:24.855 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.855 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:24.855 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:24.855 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.855 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.855 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.855 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:24.855 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.855 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.855 [2024-12-07 16:38:23.555347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:24.855 [2024-12-07 16:38:23.555423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.855 [2024-12-07 16:38:23.555452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:24.855 [2024-12-07 16:38:23.555462] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.855 [2024-12-07 16:38:23.555950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.855 [2024-12-07 16:38:23.555967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:24.855 [2024-12-07 16:38:23.556054] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:24.855 [2024-12-07 16:38:23.556085] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:24.855 [2024-12-07 16:38:23.556103] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:24.855 [2024-12-07 16:38:23.556114] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:24.855 BaseBdev1 00:12:24.855 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.855 16:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.794 "name": "raid_bdev1", 00:12:25.794 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:25.794 "strip_size_kb": 0, 00:12:25.794 "state": "online", 00:12:25.794 "raid_level": "raid1", 00:12:25.794 "superblock": true, 00:12:25.794 "num_base_bdevs": 2, 00:12:25.794 "num_base_bdevs_discovered": 1, 00:12:25.794 "num_base_bdevs_operational": 1, 00:12:25.794 "base_bdevs_list": [ 00:12:25.794 { 00:12:25.794 "name": null, 00:12:25.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.794 "is_configured": false, 00:12:25.794 "data_offset": 0, 00:12:25.794 "data_size": 63488 00:12:25.794 }, 00:12:25.794 { 00:12:25.794 "name": "BaseBdev2", 00:12:25.794 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:25.794 "is_configured": true, 00:12:25.794 "data_offset": 2048, 00:12:25.794 "data_size": 63488 00:12:25.794 } 00:12:25.794 ] 00:12:25.794 }' 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.794 16:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:26.366 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:26.366 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.366 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:26.366 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:26.366 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.366 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.366 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.366 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.366 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.366 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.366 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.366 "name": "raid_bdev1", 00:12:26.366 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:26.366 "strip_size_kb": 0, 00:12:26.366 "state": "online", 00:12:26.366 "raid_level": "raid1", 00:12:26.366 "superblock": true, 00:12:26.366 "num_base_bdevs": 2, 00:12:26.366 "num_base_bdevs_discovered": 1, 00:12:26.366 "num_base_bdevs_operational": 1, 00:12:26.366 "base_bdevs_list": [ 00:12:26.366 { 00:12:26.366 "name": null, 00:12:26.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.367 "is_configured": false, 00:12:26.367 "data_offset": 0, 00:12:26.367 "data_size": 63488 00:12:26.367 }, 00:12:26.367 { 00:12:26.367 "name": "BaseBdev2", 00:12:26.367 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:26.367 "is_configured": true, 
00:12:26.367 "data_offset": 2048, 00:12:26.367 "data_size": 63488 00:12:26.367 } 00:12:26.367 ] 00:12:26.367 }' 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.367 [2024-12-07 16:38:25.212701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:26.367 [2024-12-07 16:38:25.212921] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:26.367 [2024-12-07 16:38:25.212946] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:26.367 request: 00:12:26.367 { 00:12:26.367 "base_bdev": "BaseBdev1", 00:12:26.367 "raid_bdev": "raid_bdev1", 00:12:26.367 "method": "bdev_raid_add_base_bdev", 00:12:26.367 "req_id": 1 00:12:26.367 } 00:12:26.367 Got JSON-RPC error response 00:12:26.367 response: 00:12:26.367 { 00:12:26.367 "code": -22, 00:12:26.367 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:26.367 } 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:26.367 16:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.744 "name": "raid_bdev1", 00:12:27.744 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:27.744 "strip_size_kb": 0, 00:12:27.744 "state": "online", 00:12:27.744 "raid_level": "raid1", 00:12:27.744 "superblock": true, 00:12:27.744 "num_base_bdevs": 2, 00:12:27.744 "num_base_bdevs_discovered": 1, 00:12:27.744 "num_base_bdevs_operational": 1, 00:12:27.744 "base_bdevs_list": [ 00:12:27.744 { 00:12:27.744 "name": null, 00:12:27.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.744 "is_configured": false, 00:12:27.744 "data_offset": 0, 00:12:27.744 "data_size": 63488 00:12:27.744 }, 00:12:27.744 { 00:12:27.744 "name": "BaseBdev2", 00:12:27.744 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:27.744 "is_configured": true, 00:12:27.744 "data_offset": 2048, 00:12:27.744 "data_size": 63488 00:12:27.744 } 00:12:27.744 ] 00:12:27.744 }' 
00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.744 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.014 "name": "raid_bdev1", 00:12:28.014 "uuid": "9c85da58-cee2-4b48-8fb8-8efb7e2c2456", 00:12:28.014 "strip_size_kb": 0, 00:12:28.014 "state": "online", 00:12:28.014 "raid_level": "raid1", 00:12:28.014 "superblock": true, 00:12:28.014 "num_base_bdevs": 2, 00:12:28.014 "num_base_bdevs_discovered": 1, 00:12:28.014 "num_base_bdevs_operational": 1, 00:12:28.014 "base_bdevs_list": [ 00:12:28.014 { 00:12:28.014 "name": null, 00:12:28.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.014 "is_configured": false, 00:12:28.014 "data_offset": 0, 
00:12:28.014 "data_size": 63488 00:12:28.014 }, 00:12:28.014 { 00:12:28.014 "name": "BaseBdev2", 00:12:28.014 "uuid": "5765d80f-3f2f-5bdb-b0bf-9e4040ed5c84", 00:12:28.014 "is_configured": true, 00:12:28.014 "data_offset": 2048, 00:12:28.014 "data_size": 63488 00:12:28.014 } 00:12:28.014 ] 00:12:28.014 }' 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87827 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87827 ']' 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87827 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87827 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.014 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.015 killing process with pid 87827 00:12:28.015 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87827' 00:12:28.015 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87827 00:12:28.015 Received shutdown signal, test time was 
about 16.852760 seconds 00:12:28.015 00:12:28.015 Latency(us) 00:12:28.015 [2024-12-07T16:38:26.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.015 [2024-12-07T16:38:26.914Z] =================================================================================================================== 00:12:28.015 [2024-12-07T16:38:26.914Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:28.015 [2024-12-07 16:38:26.842171] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:28.015 16:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87827 00:12:28.015 [2024-12-07 16:38:26.842370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.015 [2024-12-07 16:38:26.842438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.015 [2024-12-07 16:38:26.842453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:28.015 [2024-12-07 16:38:26.891798] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:28.597 00:12:28.597 real 0m19.009s 00:12:28.597 user 0m25.187s 00:12:28.597 sys 0m2.420s 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.597 ************************************ 00:12:28.597 END TEST raid_rebuild_test_sb_io 00:12:28.597 ************************************ 00:12:28.597 16:38:27 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:28.597 16:38:27 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:28.597 16:38:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:28.597 
16:38:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.597 16:38:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:28.597 ************************************ 00:12:28.597 START TEST raid_rebuild_test 00:12:28.597 ************************************ 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:28.597 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:28.598 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88510 00:12:28.598 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88510 00:12:28.598 16:38:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:28.598 16:38:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 88510 ']' 00:12:28.598 16:38:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.598 16:38:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:28.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.598 16:38:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.598 16:38:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:28.598 16:38:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.598 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:28.598 Zero copy mechanism will not be used. 00:12:28.598 [2024-12-07 16:38:27.437744] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:28.598 [2024-12-07 16:38:27.437894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88510 ] 00:12:28.858 [2024-12-07 16:38:27.602261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.858 [2024-12-07 16:38:27.680157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.118 [2024-12-07 16:38:27.760944] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.118 [2024-12-07 16:38:27.761007] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.378 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:29.378 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:29.378 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:29.378 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:12:29.378 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.378 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.639 BaseBdev1_malloc 00:12:29.639 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.639 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:29.639 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.639 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.639 [2024-12-07 16:38:28.295021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:29.639 [2024-12-07 16:38:28.295115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.639 [2024-12-07 16:38:28.295148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:29.639 [2024-12-07 16:38:28.295167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.639 [2024-12-07 16:38:28.297782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.639 [2024-12-07 16:38:28.297818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:29.639 BaseBdev1 00:12:29.639 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.639 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:29.639 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:29.639 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.639 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:12:29.639 BaseBdev2_malloc 00:12:29.639 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.639 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.640 [2024-12-07 16:38:28.339849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:29.640 [2024-12-07 16:38:28.339914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.640 [2024-12-07 16:38:28.339943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:29.640 [2024-12-07 16:38:28.339953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.640 [2024-12-07 16:38:28.342704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.640 [2024-12-07 16:38:28.342737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:29.640 BaseBdev2 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.640 BaseBdev3_malloc 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.640 [2024-12-07 16:38:28.375004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:29.640 [2024-12-07 16:38:28.375054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.640 [2024-12-07 16:38:28.375083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:29.640 [2024-12-07 16:38:28.375092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.640 [2024-12-07 16:38:28.377587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.640 [2024-12-07 16:38:28.377618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:29.640 BaseBdev3 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.640 BaseBdev4_malloc 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.640 [2024-12-07 16:38:28.410810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:29.640 [2024-12-07 16:38:28.410881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.640 [2024-12-07 16:38:28.410914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:29.640 [2024-12-07 16:38:28.410923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.640 [2024-12-07 16:38:28.413647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.640 [2024-12-07 16:38:28.413680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:29.640 BaseBdev4 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.640 spare_malloc 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.640 spare_delay 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:29.640 
16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.640 [2024-12-07 16:38:28.458101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:29.640 [2024-12-07 16:38:28.458163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.640 [2024-12-07 16:38:28.458206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:29.640 [2024-12-07 16:38:28.458215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.640 [2024-12-07 16:38:28.460717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.640 [2024-12-07 16:38:28.460750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:29.640 spare 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.640 [2024-12-07 16:38:28.470181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.640 [2024-12-07 16:38:28.472383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:29.640 [2024-12-07 16:38:28.472464] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:29.640 [2024-12-07 16:38:28.472510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:29.640 [2024-12-07 16:38:28.472599] bdev_raid.c:1730:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000006280 00:12:29.640 [2024-12-07 16:38:28.472616] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:29.640 [2024-12-07 16:38:28.472918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:29.640 [2024-12-07 16:38:28.473107] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:29.640 [2024-12-07 16:38:28.473130] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:29.640 [2024-12-07 16:38:28.473287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.640 16:38:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.640 "name": "raid_bdev1", 00:12:29.640 "uuid": "257241a4-f8ed-4ebb-abb9-cc58689dddb0", 00:12:29.640 "strip_size_kb": 0, 00:12:29.640 "state": "online", 00:12:29.640 "raid_level": "raid1", 00:12:29.640 "superblock": false, 00:12:29.640 "num_base_bdevs": 4, 00:12:29.640 "num_base_bdevs_discovered": 4, 00:12:29.640 "num_base_bdevs_operational": 4, 00:12:29.640 "base_bdevs_list": [ 00:12:29.640 { 00:12:29.640 "name": "BaseBdev1", 00:12:29.640 "uuid": "649dc737-795c-5b8f-b13f-5593e8efc335", 00:12:29.640 "is_configured": true, 00:12:29.640 "data_offset": 0, 00:12:29.640 "data_size": 65536 00:12:29.640 }, 00:12:29.640 { 00:12:29.640 "name": "BaseBdev2", 00:12:29.640 "uuid": "86b1df21-e4c4-536d-a9d9-413855b5a1fb", 00:12:29.640 "is_configured": true, 00:12:29.640 "data_offset": 0, 00:12:29.640 "data_size": 65536 00:12:29.640 }, 00:12:29.640 { 00:12:29.640 "name": "BaseBdev3", 00:12:29.640 "uuid": "991e88b0-0aa5-5d90-b2b4-8a91af340c8e", 00:12:29.640 "is_configured": true, 00:12:29.640 "data_offset": 0, 00:12:29.640 "data_size": 65536 00:12:29.640 }, 00:12:29.640 { 00:12:29.640 "name": "BaseBdev4", 00:12:29.640 "uuid": "9e5183e5-49a8-5384-9a1c-7534a7f9fb9f", 00:12:29.640 "is_configured": true, 00:12:29.640 "data_offset": 0, 00:12:29.640 "data_size": 65536 00:12:29.640 } 00:12:29.640 ] 00:12:29.640 }' 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.640 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:30.211 [2024-12-07 16:38:28.901778] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.211 16:38:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:30.472 [2024-12-07 16:38:29.153054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:30.472 /dev/nbd0 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:30.472 16:38:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.472 1+0 records in 00:12:30.472 1+0 records out 00:12:30.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230066 s, 17.8 MB/s 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:30.472 16:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:35.749 65536+0 records in 00:12:35.749 65536+0 records out 00:12:35.749 33554432 bytes (34 MB, 32 MiB) copied, 5.30425 s, 6.3 MB/s 00:12:35.749 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:35.749 16:38:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:35.749 16:38:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:35.749 16:38:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:35.749 
16:38:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:35.749 16:38:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.749 16:38:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:36.008 [2024-12-07 16:38:34.732654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.008 [2024-12-07 16:38:34.748737] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.008 "name": "raid_bdev1", 00:12:36.008 "uuid": "257241a4-f8ed-4ebb-abb9-cc58689dddb0", 00:12:36.008 "strip_size_kb": 0, 00:12:36.008 "state": "online", 00:12:36.008 "raid_level": "raid1", 00:12:36.008 "superblock": false, 00:12:36.008 "num_base_bdevs": 4, 00:12:36.008 "num_base_bdevs_discovered": 3, 00:12:36.008 "num_base_bdevs_operational": 3, 00:12:36.008 "base_bdevs_list": [ 00:12:36.008 { 00:12:36.008 "name": null, 00:12:36.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.008 
"is_configured": false, 00:12:36.008 "data_offset": 0, 00:12:36.008 "data_size": 65536 00:12:36.008 }, 00:12:36.008 { 00:12:36.008 "name": "BaseBdev2", 00:12:36.008 "uuid": "86b1df21-e4c4-536d-a9d9-413855b5a1fb", 00:12:36.008 "is_configured": true, 00:12:36.008 "data_offset": 0, 00:12:36.008 "data_size": 65536 00:12:36.008 }, 00:12:36.008 { 00:12:36.008 "name": "BaseBdev3", 00:12:36.008 "uuid": "991e88b0-0aa5-5d90-b2b4-8a91af340c8e", 00:12:36.008 "is_configured": true, 00:12:36.008 "data_offset": 0, 00:12:36.008 "data_size": 65536 00:12:36.008 }, 00:12:36.008 { 00:12:36.008 "name": "BaseBdev4", 00:12:36.008 "uuid": "9e5183e5-49a8-5384-9a1c-7534a7f9fb9f", 00:12:36.008 "is_configured": true, 00:12:36.008 "data_offset": 0, 00:12:36.008 "data_size": 65536 00:12:36.008 } 00:12:36.008 ] 00:12:36.008 }' 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.008 16:38:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.574 16:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:36.574 16:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.574 16:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.574 [2024-12-07 16:38:35.188049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:36.574 [2024-12-07 16:38:35.194165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:36.574 16:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.574 16:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:36.574 [2024-12-07 16:38:35.196483] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:37.510 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.510 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.510 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.510 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.510 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.510 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.510 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.510 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.510 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.510 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.510 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.510 "name": "raid_bdev1", 00:12:37.510 "uuid": "257241a4-f8ed-4ebb-abb9-cc58689dddb0", 00:12:37.510 "strip_size_kb": 0, 00:12:37.510 "state": "online", 00:12:37.510 "raid_level": "raid1", 00:12:37.510 "superblock": false, 00:12:37.510 "num_base_bdevs": 4, 00:12:37.510 "num_base_bdevs_discovered": 4, 00:12:37.511 "num_base_bdevs_operational": 4, 00:12:37.511 "process": { 00:12:37.511 "type": "rebuild", 00:12:37.511 "target": "spare", 00:12:37.511 "progress": { 00:12:37.511 "blocks": 20480, 00:12:37.511 "percent": 31 00:12:37.511 } 00:12:37.511 }, 00:12:37.511 "base_bdevs_list": [ 00:12:37.511 { 00:12:37.511 "name": "spare", 00:12:37.511 "uuid": "8633df79-02cd-5118-ae19-3d44ae954398", 00:12:37.511 "is_configured": true, 00:12:37.511 "data_offset": 0, 00:12:37.511 "data_size": 65536 00:12:37.511 }, 00:12:37.511 { 00:12:37.511 "name": "BaseBdev2", 00:12:37.511 "uuid": 
"86b1df21-e4c4-536d-a9d9-413855b5a1fb", 00:12:37.511 "is_configured": true, 00:12:37.511 "data_offset": 0, 00:12:37.511 "data_size": 65536 00:12:37.511 }, 00:12:37.511 { 00:12:37.511 "name": "BaseBdev3", 00:12:37.511 "uuid": "991e88b0-0aa5-5d90-b2b4-8a91af340c8e", 00:12:37.511 "is_configured": true, 00:12:37.511 "data_offset": 0, 00:12:37.511 "data_size": 65536 00:12:37.511 }, 00:12:37.511 { 00:12:37.511 "name": "BaseBdev4", 00:12:37.511 "uuid": "9e5183e5-49a8-5384-9a1c-7534a7f9fb9f", 00:12:37.511 "is_configured": true, 00:12:37.511 "data_offset": 0, 00:12:37.511 "data_size": 65536 00:12:37.511 } 00:12:37.511 ] 00:12:37.511 }' 00:12:37.511 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.511 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:37.511 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.511 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.511 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:37.511 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.511 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.511 [2024-12-07 16:38:36.357247] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.511 [2024-12-07 16:38:36.406091] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:37.511 [2024-12-07 16:38:36.406172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.511 [2024-12-07 16:38:36.406192] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.511 [2024-12-07 16:38:36.406201] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.770 "name": "raid_bdev1", 00:12:37.770 "uuid": "257241a4-f8ed-4ebb-abb9-cc58689dddb0", 00:12:37.770 "strip_size_kb": 0, 00:12:37.770 "state": "online", 
00:12:37.770 "raid_level": "raid1", 00:12:37.770 "superblock": false, 00:12:37.770 "num_base_bdevs": 4, 00:12:37.770 "num_base_bdevs_discovered": 3, 00:12:37.770 "num_base_bdevs_operational": 3, 00:12:37.770 "base_bdevs_list": [ 00:12:37.770 { 00:12:37.770 "name": null, 00:12:37.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.770 "is_configured": false, 00:12:37.770 "data_offset": 0, 00:12:37.770 "data_size": 65536 00:12:37.770 }, 00:12:37.770 { 00:12:37.770 "name": "BaseBdev2", 00:12:37.770 "uuid": "86b1df21-e4c4-536d-a9d9-413855b5a1fb", 00:12:37.770 "is_configured": true, 00:12:37.770 "data_offset": 0, 00:12:37.770 "data_size": 65536 00:12:37.770 }, 00:12:37.770 { 00:12:37.770 "name": "BaseBdev3", 00:12:37.770 "uuid": "991e88b0-0aa5-5d90-b2b4-8a91af340c8e", 00:12:37.770 "is_configured": true, 00:12:37.770 "data_offset": 0, 00:12:37.770 "data_size": 65536 00:12:37.770 }, 00:12:37.770 { 00:12:37.770 "name": "BaseBdev4", 00:12:37.770 "uuid": "9e5183e5-49a8-5384-9a1c-7534a7f9fb9f", 00:12:37.770 "is_configured": true, 00:12:37.770 "data_offset": 0, 00:12:37.770 "data_size": 65536 00:12:37.770 } 00:12:37.770 ] 00:12:37.770 }' 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.770 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.029 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.029 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.029 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.029 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.029 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.029 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:38.029 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.029 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.029 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.029 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.029 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.029 "name": "raid_bdev1", 00:12:38.029 "uuid": "257241a4-f8ed-4ebb-abb9-cc58689dddb0", 00:12:38.029 "strip_size_kb": 0, 00:12:38.029 "state": "online", 00:12:38.029 "raid_level": "raid1", 00:12:38.029 "superblock": false, 00:12:38.029 "num_base_bdevs": 4, 00:12:38.029 "num_base_bdevs_discovered": 3, 00:12:38.029 "num_base_bdevs_operational": 3, 00:12:38.029 "base_bdevs_list": [ 00:12:38.029 { 00:12:38.029 "name": null, 00:12:38.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.029 "is_configured": false, 00:12:38.029 "data_offset": 0, 00:12:38.029 "data_size": 65536 00:12:38.029 }, 00:12:38.029 { 00:12:38.029 "name": "BaseBdev2", 00:12:38.029 "uuid": "86b1df21-e4c4-536d-a9d9-413855b5a1fb", 00:12:38.029 "is_configured": true, 00:12:38.029 "data_offset": 0, 00:12:38.029 "data_size": 65536 00:12:38.029 }, 00:12:38.029 { 00:12:38.029 "name": "BaseBdev3", 00:12:38.029 "uuid": "991e88b0-0aa5-5d90-b2b4-8a91af340c8e", 00:12:38.029 "is_configured": true, 00:12:38.029 "data_offset": 0, 00:12:38.029 "data_size": 65536 00:12:38.029 }, 00:12:38.029 { 00:12:38.029 "name": "BaseBdev4", 00:12:38.029 "uuid": "9e5183e5-49a8-5384-9a1c-7534a7f9fb9f", 00:12:38.029 "is_configured": true, 00:12:38.029 "data_offset": 0, 00:12:38.029 "data_size": 65536 00:12:38.029 } 00:12:38.029 ] 00:12:38.029 }' 00:12:38.029 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.029 16:38:36 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.029 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.289 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.289 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:38.289 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.289 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.289 [2024-12-07 16:38:36.948693] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:38.289 [2024-12-07 16:38:36.954632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:38.289 16:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.289 16:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:38.289 [2024-12-07 16:38:36.956917] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:39.228 16:38:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.228 16:38:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.228 16:38:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.228 16:38:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.228 16:38:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.228 16:38:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.228 16:38:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.228 16:38:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.228 16:38:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.228 16:38:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.228 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.228 "name": "raid_bdev1", 00:12:39.228 "uuid": "257241a4-f8ed-4ebb-abb9-cc58689dddb0", 00:12:39.228 "strip_size_kb": 0, 00:12:39.228 "state": "online", 00:12:39.228 "raid_level": "raid1", 00:12:39.228 "superblock": false, 00:12:39.228 "num_base_bdevs": 4, 00:12:39.228 "num_base_bdevs_discovered": 4, 00:12:39.228 "num_base_bdevs_operational": 4, 00:12:39.228 "process": { 00:12:39.228 "type": "rebuild", 00:12:39.228 "target": "spare", 00:12:39.228 "progress": { 00:12:39.228 "blocks": 20480, 00:12:39.228 "percent": 31 00:12:39.228 } 00:12:39.228 }, 00:12:39.228 "base_bdevs_list": [ 00:12:39.228 { 00:12:39.228 "name": "spare", 00:12:39.228 "uuid": "8633df79-02cd-5118-ae19-3d44ae954398", 00:12:39.228 "is_configured": true, 00:12:39.228 "data_offset": 0, 00:12:39.228 "data_size": 65536 00:12:39.228 }, 00:12:39.228 { 00:12:39.228 "name": "BaseBdev2", 00:12:39.228 "uuid": "86b1df21-e4c4-536d-a9d9-413855b5a1fb", 00:12:39.228 "is_configured": true, 00:12:39.228 "data_offset": 0, 00:12:39.228 "data_size": 65536 00:12:39.228 }, 00:12:39.228 { 00:12:39.228 "name": "BaseBdev3", 00:12:39.228 "uuid": "991e88b0-0aa5-5d90-b2b4-8a91af340c8e", 00:12:39.228 "is_configured": true, 00:12:39.228 "data_offset": 0, 00:12:39.228 "data_size": 65536 00:12:39.228 }, 00:12:39.228 { 00:12:39.228 "name": "BaseBdev4", 00:12:39.228 "uuid": "9e5183e5-49a8-5384-9a1c-7534a7f9fb9f", 00:12:39.228 "is_configured": true, 00:12:39.228 "data_offset": 0, 00:12:39.228 "data_size": 65536 00:12:39.228 } 00:12:39.228 ] 00:12:39.228 }' 00:12:39.228 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:12:39.228 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.228 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.228 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.228 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:39.228 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:39.228 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:39.228 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:39.228 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:39.228 16:38:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.228 16:38:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.228 [2024-12-07 16:38:38.108975] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:39.488 [2024-12-07 16:38:38.165679] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.488 
16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.488 "name": "raid_bdev1", 00:12:39.488 "uuid": "257241a4-f8ed-4ebb-abb9-cc58689dddb0", 00:12:39.488 "strip_size_kb": 0, 00:12:39.488 "state": "online", 00:12:39.488 "raid_level": "raid1", 00:12:39.488 "superblock": false, 00:12:39.488 "num_base_bdevs": 4, 00:12:39.488 "num_base_bdevs_discovered": 3, 00:12:39.488 "num_base_bdevs_operational": 3, 00:12:39.488 "process": { 00:12:39.488 "type": "rebuild", 00:12:39.488 "target": "spare", 00:12:39.488 "progress": { 00:12:39.488 "blocks": 24576, 00:12:39.488 "percent": 37 00:12:39.488 } 00:12:39.488 }, 00:12:39.488 "base_bdevs_list": [ 00:12:39.488 { 00:12:39.488 "name": "spare", 00:12:39.488 "uuid": "8633df79-02cd-5118-ae19-3d44ae954398", 00:12:39.488 "is_configured": true, 00:12:39.488 "data_offset": 0, 00:12:39.488 "data_size": 65536 00:12:39.488 }, 00:12:39.488 { 00:12:39.488 "name": null, 00:12:39.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.488 "is_configured": false, 00:12:39.488 "data_offset": 0, 00:12:39.488 "data_size": 65536 00:12:39.488 }, 00:12:39.488 { 00:12:39.488 "name": "BaseBdev3", 00:12:39.488 "uuid": "991e88b0-0aa5-5d90-b2b4-8a91af340c8e", 00:12:39.488 "is_configured": true, 
00:12:39.488 "data_offset": 0, 00:12:39.488 "data_size": 65536 00:12:39.488 }, 00:12:39.488 { 00:12:39.488 "name": "BaseBdev4", 00:12:39.488 "uuid": "9e5183e5-49a8-5384-9a1c-7534a7f9fb9f", 00:12:39.488 "is_configured": true, 00:12:39.488 "data_offset": 0, 00:12:39.488 "data_size": 65536 00:12:39.488 } 00:12:39.488 ] 00:12:39.488 }' 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=370 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.488 16:38:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.488 "name": "raid_bdev1", 00:12:39.488 "uuid": "257241a4-f8ed-4ebb-abb9-cc58689dddb0", 00:12:39.488 "strip_size_kb": 0, 00:12:39.488 "state": "online", 00:12:39.488 "raid_level": "raid1", 00:12:39.488 "superblock": false, 00:12:39.488 "num_base_bdevs": 4, 00:12:39.488 "num_base_bdevs_discovered": 3, 00:12:39.488 "num_base_bdevs_operational": 3, 00:12:39.488 "process": { 00:12:39.488 "type": "rebuild", 00:12:39.488 "target": "spare", 00:12:39.488 "progress": { 00:12:39.488 "blocks": 26624, 00:12:39.488 "percent": 40 00:12:39.488 } 00:12:39.488 }, 00:12:39.488 "base_bdevs_list": [ 00:12:39.488 { 00:12:39.488 "name": "spare", 00:12:39.488 "uuid": "8633df79-02cd-5118-ae19-3d44ae954398", 00:12:39.488 "is_configured": true, 00:12:39.488 "data_offset": 0, 00:12:39.488 "data_size": 65536 00:12:39.488 }, 00:12:39.488 { 00:12:39.488 "name": null, 00:12:39.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.488 "is_configured": false, 00:12:39.488 "data_offset": 0, 00:12:39.488 "data_size": 65536 00:12:39.488 }, 00:12:39.488 { 00:12:39.488 "name": "BaseBdev3", 00:12:39.488 "uuid": "991e88b0-0aa5-5d90-b2b4-8a91af340c8e", 00:12:39.488 "is_configured": true, 00:12:39.488 "data_offset": 0, 00:12:39.488 "data_size": 65536 00:12:39.488 }, 00:12:39.488 { 00:12:39.488 "name": "BaseBdev4", 00:12:39.488 "uuid": "9e5183e5-49a8-5384-9a1c-7534a7f9fb9f", 00:12:39.488 "is_configured": true, 00:12:39.488 "data_offset": 0, 00:12:39.488 "data_size": 65536 00:12:39.488 } 00:12:39.488 ] 00:12:39.488 }' 00:12:39.488 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.749 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.749 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:12:39.749 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.749 16:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:40.689 16:38:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:40.689 16:38:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.689 16:38:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.689 16:38:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.689 16:38:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.689 16:38:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.689 16:38:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.689 16:38:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.689 16:38:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.689 16:38:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.689 16:38:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.689 16:38:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.689 "name": "raid_bdev1", 00:12:40.689 "uuid": "257241a4-f8ed-4ebb-abb9-cc58689dddb0", 00:12:40.689 "strip_size_kb": 0, 00:12:40.689 "state": "online", 00:12:40.689 "raid_level": "raid1", 00:12:40.689 "superblock": false, 00:12:40.689 "num_base_bdevs": 4, 00:12:40.689 "num_base_bdevs_discovered": 3, 00:12:40.689 "num_base_bdevs_operational": 3, 00:12:40.689 "process": { 00:12:40.689 "type": "rebuild", 00:12:40.689 "target": "spare", 00:12:40.689 "progress": { 00:12:40.689 
"blocks": 51200, 00:12:40.689 "percent": 78 00:12:40.689 } 00:12:40.689 }, 00:12:40.689 "base_bdevs_list": [ 00:12:40.689 { 00:12:40.689 "name": "spare", 00:12:40.689 "uuid": "8633df79-02cd-5118-ae19-3d44ae954398", 00:12:40.689 "is_configured": true, 00:12:40.689 "data_offset": 0, 00:12:40.689 "data_size": 65536 00:12:40.689 }, 00:12:40.689 { 00:12:40.689 "name": null, 00:12:40.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.689 "is_configured": false, 00:12:40.689 "data_offset": 0, 00:12:40.689 "data_size": 65536 00:12:40.690 }, 00:12:40.690 { 00:12:40.690 "name": "BaseBdev3", 00:12:40.690 "uuid": "991e88b0-0aa5-5d90-b2b4-8a91af340c8e", 00:12:40.690 "is_configured": true, 00:12:40.690 "data_offset": 0, 00:12:40.690 "data_size": 65536 00:12:40.690 }, 00:12:40.690 { 00:12:40.690 "name": "BaseBdev4", 00:12:40.690 "uuid": "9e5183e5-49a8-5384-9a1c-7534a7f9fb9f", 00:12:40.690 "is_configured": true, 00:12:40.690 "data_offset": 0, 00:12:40.690 "data_size": 65536 00:12:40.690 } 00:12:40.690 ] 00:12:40.690 }' 00:12:40.690 16:38:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.690 16:38:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.690 16:38:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.950 16:38:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.950 16:38:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:41.519 [2024-12-07 16:38:40.179914] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:41.519 [2024-12-07 16:38:40.180021] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:41.519 [2024-12-07 16:38:40.180071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.781 16:38:40 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:41.781 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.781 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.781 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.781 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.781 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.781 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.781 16:38:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.781 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.781 16:38:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.781 16:38:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.041 "name": "raid_bdev1", 00:12:42.041 "uuid": "257241a4-f8ed-4ebb-abb9-cc58689dddb0", 00:12:42.041 "strip_size_kb": 0, 00:12:42.041 "state": "online", 00:12:42.041 "raid_level": "raid1", 00:12:42.041 "superblock": false, 00:12:42.041 "num_base_bdevs": 4, 00:12:42.041 "num_base_bdevs_discovered": 3, 00:12:42.041 "num_base_bdevs_operational": 3, 00:12:42.041 "base_bdevs_list": [ 00:12:42.041 { 00:12:42.041 "name": "spare", 00:12:42.041 "uuid": "8633df79-02cd-5118-ae19-3d44ae954398", 00:12:42.041 "is_configured": true, 00:12:42.041 "data_offset": 0, 00:12:42.041 "data_size": 65536 00:12:42.041 }, 00:12:42.041 { 00:12:42.041 "name": null, 00:12:42.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.041 "is_configured": false, 00:12:42.041 
"data_offset": 0, 00:12:42.041 "data_size": 65536 00:12:42.041 }, 00:12:42.041 { 00:12:42.041 "name": "BaseBdev3", 00:12:42.041 "uuid": "991e88b0-0aa5-5d90-b2b4-8a91af340c8e", 00:12:42.041 "is_configured": true, 00:12:42.041 "data_offset": 0, 00:12:42.041 "data_size": 65536 00:12:42.041 }, 00:12:42.041 { 00:12:42.041 "name": "BaseBdev4", 00:12:42.041 "uuid": "9e5183e5-49a8-5384-9a1c-7534a7f9fb9f", 00:12:42.041 "is_configured": true, 00:12:42.041 "data_offset": 0, 00:12:42.041 "data_size": 65536 00:12:42.041 } 00:12:42.041 ] 00:12:42.041 }' 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.041 16:38:40 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.041 "name": "raid_bdev1", 00:12:42.041 "uuid": "257241a4-f8ed-4ebb-abb9-cc58689dddb0", 00:12:42.041 "strip_size_kb": 0, 00:12:42.041 "state": "online", 00:12:42.041 "raid_level": "raid1", 00:12:42.041 "superblock": false, 00:12:42.041 "num_base_bdevs": 4, 00:12:42.041 "num_base_bdevs_discovered": 3, 00:12:42.041 "num_base_bdevs_operational": 3, 00:12:42.041 "base_bdevs_list": [ 00:12:42.041 { 00:12:42.041 "name": "spare", 00:12:42.041 "uuid": "8633df79-02cd-5118-ae19-3d44ae954398", 00:12:42.041 "is_configured": true, 00:12:42.041 "data_offset": 0, 00:12:42.041 "data_size": 65536 00:12:42.041 }, 00:12:42.041 { 00:12:42.041 "name": null, 00:12:42.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.041 "is_configured": false, 00:12:42.041 "data_offset": 0, 00:12:42.041 "data_size": 65536 00:12:42.041 }, 00:12:42.041 { 00:12:42.041 "name": "BaseBdev3", 00:12:42.041 "uuid": "991e88b0-0aa5-5d90-b2b4-8a91af340c8e", 00:12:42.041 "is_configured": true, 00:12:42.041 "data_offset": 0, 00:12:42.041 "data_size": 65536 00:12:42.041 }, 00:12:42.041 { 00:12:42.041 "name": "BaseBdev4", 00:12:42.041 "uuid": "9e5183e5-49a8-5384-9a1c-7534a7f9fb9f", 00:12:42.041 "is_configured": true, 00:12:42.041 "data_offset": 0, 00:12:42.041 "data_size": 65536 00:12:42.041 } 00:12:42.041 ] 00:12:42.041 }' 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.041 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.042 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.042 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.042 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.042 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.042 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.042 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.042 16:38:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.042 16:38:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.042 16:38:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.042 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.042 "name": "raid_bdev1", 00:12:42.042 "uuid": "257241a4-f8ed-4ebb-abb9-cc58689dddb0", 00:12:42.042 "strip_size_kb": 0, 00:12:42.042 "state": "online", 00:12:42.042 "raid_level": "raid1", 00:12:42.042 "superblock": false, 00:12:42.042 "num_base_bdevs": 4, 00:12:42.042 
"num_base_bdevs_discovered": 3, 00:12:42.042 "num_base_bdevs_operational": 3, 00:12:42.042 "base_bdevs_list": [ 00:12:42.042 { 00:12:42.042 "name": "spare", 00:12:42.042 "uuid": "8633df79-02cd-5118-ae19-3d44ae954398", 00:12:42.042 "is_configured": true, 00:12:42.042 "data_offset": 0, 00:12:42.042 "data_size": 65536 00:12:42.042 }, 00:12:42.042 { 00:12:42.042 "name": null, 00:12:42.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.042 "is_configured": false, 00:12:42.042 "data_offset": 0, 00:12:42.042 "data_size": 65536 00:12:42.042 }, 00:12:42.042 { 00:12:42.042 "name": "BaseBdev3", 00:12:42.042 "uuid": "991e88b0-0aa5-5d90-b2b4-8a91af340c8e", 00:12:42.042 "is_configured": true, 00:12:42.042 "data_offset": 0, 00:12:42.042 "data_size": 65536 00:12:42.042 }, 00:12:42.042 { 00:12:42.042 "name": "BaseBdev4", 00:12:42.042 "uuid": "9e5183e5-49a8-5384-9a1c-7534a7f9fb9f", 00:12:42.042 "is_configured": true, 00:12:42.042 "data_offset": 0, 00:12:42.042 "data_size": 65536 00:12:42.042 } 00:12:42.042 ] 00:12:42.042 }' 00:12:42.042 16:38:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.042 16:38:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.611 16:38:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:42.611 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.611 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.611 [2024-12-07 16:38:41.292868] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:42.611 [2024-12-07 16:38:41.292901] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:42.611 [2024-12-07 16:38:41.293026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.611 [2024-12-07 16:38:41.293122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:12:42.611 [2024-12-07 16:38:41.293136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:42.611 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.611 16:38:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.611 16:38:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:42.611 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.611 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.611 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.611 16:38:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:42.611 16:38:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:42.611 16:38:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:42.611 16:38:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:42.612 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.612 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:42.612 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:42.612 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:42.612 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:42.612 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:42.612 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:42.612 16:38:41 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:42.612 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:42.871 /dev/nbd0 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.871 1+0 records in 00:12:42.871 1+0 records out 00:12:42.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327403 s, 12.5 MB/s 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:42.871 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:43.131 /dev/nbd1 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.131 1+0 records in 00:12:43.131 1+0 records out 00:12:43.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402715 s, 10.2 MB/s 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.131 16:38:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:43.391 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:43.391 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:43.391 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:43.391 16:38:42 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.391 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.391 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:43.391 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:43.391 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.391 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.391 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:43.650 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:43.650 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:43.650 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:43.650 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.650 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.650 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:43.650 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:43.650 16:38:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.650 16:38:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:43.650 16:38:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88510 00:12:43.650 16:38:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88510 ']' 00:12:43.651 16:38:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88510 00:12:43.651 16:38:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # 
uname 00:12:43.651 16:38:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:43.651 16:38:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88510 00:12:43.651 16:38:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:43.651 16:38:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:43.651 killing process with pid 88510 00:12:43.651 16:38:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88510' 00:12:43.651 Received shutdown signal, test time was about 60.000000 seconds 00:12:43.651 00:12:43.651 Latency(us) 00:12:43.651 [2024-12-07T16:38:42.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.651 [2024-12-07T16:38:42.550Z] =================================================================================================================== 00:12:43.651 [2024-12-07T16:38:42.550Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:43.651 16:38:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88510 00:12:43.651 16:38:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88510 00:12:43.651 [2024-12-07 16:38:42.387494] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:43.651 [2024-12-07 16:38:42.484932] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:44.219 16:38:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:44.219 00:12:44.219 real 0m15.517s 00:12:44.219 user 0m17.388s 00:12:44.219 sys 0m3.186s 00:12:44.219 16:38:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.219 16:38:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.219 ************************************ 00:12:44.219 END TEST raid_rebuild_test 
00:12:44.219 ************************************ 00:12:44.219 16:38:42 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:44.219 16:38:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:44.219 16:38:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:44.220 16:38:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:44.220 ************************************ 00:12:44.220 START TEST raid_rebuild_test_sb 00:12:44.220 ************************************ 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88935 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88935 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88935 ']' 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:44.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:44.220 16:38:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.220 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:44.220 Zero copy mechanism will not be used. 00:12:44.220 [2024-12-07 16:38:43.023621] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:44.220 [2024-12-07 16:38:43.023754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88935 ] 00:12:44.479 [2024-12-07 16:38:43.187599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.479 [2024-12-07 16:38:43.258467] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.479 [2024-12-07 16:38:43.336306] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.479 [2024-12-07 16:38:43.336362] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.046 BaseBdev1_malloc 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.046 [2024-12-07 16:38:43.875602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:45.046 [2024-12-07 16:38:43.875678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.046 [2024-12-07 16:38:43.875708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:45.046 [2024-12-07 16:38:43.875726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.046 [2024-12-07 16:38:43.878193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.046 [2024-12-07 16:38:43.878242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:45.046 BaseBdev1 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.046 BaseBdev2_malloc 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.046 [2024-12-07 16:38:43.920700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:45.046 [2024-12-07 16:38:43.920751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.046 [2024-12-07 16:38:43.920774] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:45.046 [2024-12-07 16:38:43.920783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.046 [2024-12-07 16:38:43.923109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.046 [2024-12-07 16:38:43.923142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:45.046 BaseBdev2 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.046 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.307 BaseBdev3_malloc 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.307 [2024-12-07 16:38:43.955666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:45.307 [2024-12-07 16:38:43.955718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.307 [2024-12-07 16:38:43.955748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:45.307 [2024-12-07 16:38:43.955757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:12:45.307 [2024-12-07 16:38:43.958095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.307 [2024-12-07 16:38:43.958127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:45.307 BaseBdev3 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.307 BaseBdev4_malloc 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.307 [2024-12-07 16:38:43.990728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:45.307 [2024-12-07 16:38:43.990787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.307 [2024-12-07 16:38:43.990814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:45.307 [2024-12-07 16:38:43.990823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.307 [2024-12-07 16:38:43.993304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.307 [2024-12-07 16:38:43.993335] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:45.307 BaseBdev4 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.307 16:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.307 spare_malloc 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.307 spare_delay 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.307 [2024-12-07 16:38:44.029951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:45.307 [2024-12-07 16:38:44.030007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.307 [2024-12-07 16:38:44.030031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:45.307 [2024-12-07 16:38:44.030041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:12:45.307 [2024-12-07 16:38:44.032577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.307 [2024-12-07 16:38:44.032611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:45.307 spare 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.307 [2024-12-07 16:38:44.038028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.307 [2024-12-07 16:38:44.040179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:45.307 [2024-12-07 16:38:44.040261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:45.307 [2024-12-07 16:38:44.040308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:45.307 [2024-12-07 16:38:44.040510] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:45.307 [2024-12-07 16:38:44.040529] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:45.307 [2024-12-07 16:38:44.040798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:45.307 [2024-12-07 16:38:44.040978] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:45.307 [2024-12-07 16:38:44.040994] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:45.307 [2024-12-07 16:38:44.041142] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.307 "name": "raid_bdev1", 00:12:45.307 "uuid": 
"73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:45.307 "strip_size_kb": 0, 00:12:45.307 "state": "online", 00:12:45.307 "raid_level": "raid1", 00:12:45.307 "superblock": true, 00:12:45.307 "num_base_bdevs": 4, 00:12:45.307 "num_base_bdevs_discovered": 4, 00:12:45.307 "num_base_bdevs_operational": 4, 00:12:45.307 "base_bdevs_list": [ 00:12:45.307 { 00:12:45.307 "name": "BaseBdev1", 00:12:45.307 "uuid": "f447f7af-3485-549e-be80-0188f905e124", 00:12:45.307 "is_configured": true, 00:12:45.307 "data_offset": 2048, 00:12:45.307 "data_size": 63488 00:12:45.307 }, 00:12:45.307 { 00:12:45.307 "name": "BaseBdev2", 00:12:45.307 "uuid": "13f9e3c7-0940-58b2-9371-21feb013e495", 00:12:45.307 "is_configured": true, 00:12:45.307 "data_offset": 2048, 00:12:45.307 "data_size": 63488 00:12:45.307 }, 00:12:45.307 { 00:12:45.307 "name": "BaseBdev3", 00:12:45.307 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:45.307 "is_configured": true, 00:12:45.307 "data_offset": 2048, 00:12:45.307 "data_size": 63488 00:12:45.307 }, 00:12:45.307 { 00:12:45.307 "name": "BaseBdev4", 00:12:45.307 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:45.307 "is_configured": true, 00:12:45.307 "data_offset": 2048, 00:12:45.307 "data_size": 63488 00:12:45.307 } 00:12:45.307 ] 00:12:45.307 }' 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.307 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.876 [2024-12-07 16:38:44.501591] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:45.876 16:38:44 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:45.876 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:45.876 [2024-12-07 16:38:44.764858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:46.136 /dev/nbd0 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.136 1+0 records in 00:12:46.136 1+0 records out 00:12:46.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004391 s, 9.3 MB/s 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:46.136 16:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:51.410 63488+0 records in 00:12:51.410 63488+0 records out 00:12:51.410 32505856 bytes (33 MB, 31 MiB) copied, 4.85392 s, 6.7 MB/s 00:12:51.410 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:51.410 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.410 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:51.411 [2024-12-07 16:38:49.906523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.411 [2024-12-07 16:38:49.922574] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.411 "name": "raid_bdev1", 00:12:51.411 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:51.411 "strip_size_kb": 0, 00:12:51.411 "state": "online", 00:12:51.411 "raid_level": "raid1", 00:12:51.411 "superblock": true, 00:12:51.411 "num_base_bdevs": 4, 00:12:51.411 "num_base_bdevs_discovered": 3, 00:12:51.411 "num_base_bdevs_operational": 3, 00:12:51.411 "base_bdevs_list": [ 00:12:51.411 { 00:12:51.411 "name": null, 00:12:51.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.411 "is_configured": false, 00:12:51.411 "data_offset": 0, 00:12:51.411 "data_size": 63488 00:12:51.411 }, 00:12:51.411 { 00:12:51.411 "name": "BaseBdev2", 00:12:51.411 "uuid": "13f9e3c7-0940-58b2-9371-21feb013e495", 00:12:51.411 "is_configured": true, 00:12:51.411 
"data_offset": 2048, 00:12:51.411 "data_size": 63488 00:12:51.411 }, 00:12:51.411 { 00:12:51.411 "name": "BaseBdev3", 00:12:51.411 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:51.411 "is_configured": true, 00:12:51.411 "data_offset": 2048, 00:12:51.411 "data_size": 63488 00:12:51.411 }, 00:12:51.411 { 00:12:51.411 "name": "BaseBdev4", 00:12:51.411 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:51.411 "is_configured": true, 00:12:51.411 "data_offset": 2048, 00:12:51.411 "data_size": 63488 00:12:51.411 } 00:12:51.411 ] 00:12:51.411 }' 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.411 16:38:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.668 16:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:51.668 16:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.669 16:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.669 [2024-12-07 16:38:50.385821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.669 [2024-12-07 16:38:50.391754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:51.669 16:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.669 16:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:51.669 [2024-12-07 16:38:50.394033] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:52.606 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.606 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.606 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:12:52.606 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.606 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.606 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.606 16:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.606 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.607 16:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.607 16:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.607 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.607 "name": "raid_bdev1", 00:12:52.607 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:52.607 "strip_size_kb": 0, 00:12:52.607 "state": "online", 00:12:52.607 "raid_level": "raid1", 00:12:52.607 "superblock": true, 00:12:52.607 "num_base_bdevs": 4, 00:12:52.607 "num_base_bdevs_discovered": 4, 00:12:52.607 "num_base_bdevs_operational": 4, 00:12:52.607 "process": { 00:12:52.607 "type": "rebuild", 00:12:52.607 "target": "spare", 00:12:52.607 "progress": { 00:12:52.607 "blocks": 20480, 00:12:52.607 "percent": 32 00:12:52.607 } 00:12:52.607 }, 00:12:52.607 "base_bdevs_list": [ 00:12:52.607 { 00:12:52.607 "name": "spare", 00:12:52.607 "uuid": "9c0b5814-5222-53f0-94d5-466c80df0d24", 00:12:52.607 "is_configured": true, 00:12:52.607 "data_offset": 2048, 00:12:52.607 "data_size": 63488 00:12:52.607 }, 00:12:52.607 { 00:12:52.607 "name": "BaseBdev2", 00:12:52.607 "uuid": "13f9e3c7-0940-58b2-9371-21feb013e495", 00:12:52.607 "is_configured": true, 00:12:52.607 "data_offset": 2048, 00:12:52.607 "data_size": 63488 00:12:52.607 }, 00:12:52.607 { 00:12:52.607 "name": "BaseBdev3", 00:12:52.607 "uuid": 
"ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:52.607 "is_configured": true, 00:12:52.607 "data_offset": 2048, 00:12:52.607 "data_size": 63488 00:12:52.607 }, 00:12:52.607 { 00:12:52.607 "name": "BaseBdev4", 00:12:52.607 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:52.607 "is_configured": true, 00:12:52.607 "data_offset": 2048, 00:12:52.607 "data_size": 63488 00:12:52.607 } 00:12:52.607 ] 00:12:52.607 }' 00:12:52.607 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.607 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.607 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.866 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.867 [2024-12-07 16:38:51.537850] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.867 [2024-12-07 16:38:51.602579] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:52.867 [2024-12-07 16:38:51.602638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.867 [2024-12-07 16:38:51.602659] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.867 [2024-12-07 16:38:51.602666] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.867 "name": "raid_bdev1", 00:12:52.867 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:52.867 "strip_size_kb": 0, 00:12:52.867 "state": "online", 00:12:52.867 "raid_level": "raid1", 00:12:52.867 "superblock": true, 00:12:52.867 "num_base_bdevs": 4, 00:12:52.867 
"num_base_bdevs_discovered": 3, 00:12:52.867 "num_base_bdevs_operational": 3, 00:12:52.867 "base_bdevs_list": [ 00:12:52.867 { 00:12:52.867 "name": null, 00:12:52.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.867 "is_configured": false, 00:12:52.867 "data_offset": 0, 00:12:52.867 "data_size": 63488 00:12:52.867 }, 00:12:52.867 { 00:12:52.867 "name": "BaseBdev2", 00:12:52.867 "uuid": "13f9e3c7-0940-58b2-9371-21feb013e495", 00:12:52.867 "is_configured": true, 00:12:52.867 "data_offset": 2048, 00:12:52.867 "data_size": 63488 00:12:52.867 }, 00:12:52.867 { 00:12:52.867 "name": "BaseBdev3", 00:12:52.867 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:52.867 "is_configured": true, 00:12:52.867 "data_offset": 2048, 00:12:52.867 "data_size": 63488 00:12:52.867 }, 00:12:52.867 { 00:12:52.867 "name": "BaseBdev4", 00:12:52.867 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:52.867 "is_configured": true, 00:12:52.867 "data_offset": 2048, 00:12:52.867 "data_size": 63488 00:12:52.867 } 00:12:52.867 ] 00:12:52.867 }' 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.867 16:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.436 "name": "raid_bdev1", 00:12:53.436 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:53.436 "strip_size_kb": 0, 00:12:53.436 "state": "online", 00:12:53.436 "raid_level": "raid1", 00:12:53.436 "superblock": true, 00:12:53.436 "num_base_bdevs": 4, 00:12:53.436 "num_base_bdevs_discovered": 3, 00:12:53.436 "num_base_bdevs_operational": 3, 00:12:53.436 "base_bdevs_list": [ 00:12:53.436 { 00:12:53.436 "name": null, 00:12:53.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.436 "is_configured": false, 00:12:53.436 "data_offset": 0, 00:12:53.436 "data_size": 63488 00:12:53.436 }, 00:12:53.436 { 00:12:53.436 "name": "BaseBdev2", 00:12:53.436 "uuid": "13f9e3c7-0940-58b2-9371-21feb013e495", 00:12:53.436 "is_configured": true, 00:12:53.436 "data_offset": 2048, 00:12:53.436 "data_size": 63488 00:12:53.436 }, 00:12:53.436 { 00:12:53.436 "name": "BaseBdev3", 00:12:53.436 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:53.436 "is_configured": true, 00:12:53.436 "data_offset": 2048, 00:12:53.436 "data_size": 63488 00:12:53.436 }, 00:12:53.436 { 00:12:53.436 "name": "BaseBdev4", 00:12:53.436 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:53.436 "is_configured": true, 00:12:53.436 "data_offset": 2048, 00:12:53.436 "data_size": 63488 00:12:53.436 } 00:12:53.436 ] 00:12:53.436 }' 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.436 [2024-12-07 16:38:52.224607] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.436 [2024-12-07 16:38:52.230482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.436 16:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:53.436 [2024-12-07 16:38:52.232764] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:54.373 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.373 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.373 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.374 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.374 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.374 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.374 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.374 16:38:53 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.374 16:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.374 16:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.633 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.633 "name": "raid_bdev1", 00:12:54.633 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:54.633 "strip_size_kb": 0, 00:12:54.633 "state": "online", 00:12:54.633 "raid_level": "raid1", 00:12:54.633 "superblock": true, 00:12:54.633 "num_base_bdevs": 4, 00:12:54.633 "num_base_bdevs_discovered": 4, 00:12:54.633 "num_base_bdevs_operational": 4, 00:12:54.633 "process": { 00:12:54.633 "type": "rebuild", 00:12:54.633 "target": "spare", 00:12:54.633 "progress": { 00:12:54.633 "blocks": 20480, 00:12:54.633 "percent": 32 00:12:54.633 } 00:12:54.633 }, 00:12:54.633 "base_bdevs_list": [ 00:12:54.633 { 00:12:54.633 "name": "spare", 00:12:54.633 "uuid": "9c0b5814-5222-53f0-94d5-466c80df0d24", 00:12:54.633 "is_configured": true, 00:12:54.633 "data_offset": 2048, 00:12:54.633 "data_size": 63488 00:12:54.633 }, 00:12:54.633 { 00:12:54.633 "name": "BaseBdev2", 00:12:54.633 "uuid": "13f9e3c7-0940-58b2-9371-21feb013e495", 00:12:54.633 "is_configured": true, 00:12:54.633 "data_offset": 2048, 00:12:54.633 "data_size": 63488 00:12:54.633 }, 00:12:54.633 { 00:12:54.633 "name": "BaseBdev3", 00:12:54.633 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:54.633 "is_configured": true, 00:12:54.633 "data_offset": 2048, 00:12:54.633 "data_size": 63488 00:12:54.633 }, 00:12:54.633 { 00:12:54.633 "name": "BaseBdev4", 00:12:54.633 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:54.633 "is_configured": true, 00:12:54.633 "data_offset": 2048, 00:12:54.633 "data_size": 63488 00:12:54.633 } 00:12:54.633 ] 00:12:54.633 }' 00:12:54.633 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:12:54.633 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.633 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.633 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.633 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:54.633 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:54.633 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:54.633 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:54.633 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:54.633 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:54.633 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:54.633 16:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.633 16:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.633 [2024-12-07 16:38:53.400631] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:54.894 [2024-12-07 16:38:53.541246] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.894 "name": "raid_bdev1", 00:12:54.894 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:54.894 "strip_size_kb": 0, 00:12:54.894 "state": "online", 00:12:54.894 "raid_level": "raid1", 00:12:54.894 "superblock": true, 00:12:54.894 "num_base_bdevs": 4, 00:12:54.894 "num_base_bdevs_discovered": 3, 00:12:54.894 "num_base_bdevs_operational": 3, 00:12:54.894 "process": { 00:12:54.894 "type": "rebuild", 00:12:54.894 "target": "spare", 00:12:54.894 "progress": { 00:12:54.894 "blocks": 24576, 00:12:54.894 "percent": 38 00:12:54.894 } 00:12:54.894 }, 00:12:54.894 "base_bdevs_list": [ 00:12:54.894 { 00:12:54.894 "name": "spare", 00:12:54.894 "uuid": "9c0b5814-5222-53f0-94d5-466c80df0d24", 00:12:54.894 "is_configured": true, 00:12:54.894 "data_offset": 2048, 00:12:54.894 "data_size": 63488 00:12:54.894 }, 00:12:54.894 { 00:12:54.894 "name": null, 00:12:54.894 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:54.894 "is_configured": false, 00:12:54.894 "data_offset": 0, 00:12:54.894 "data_size": 63488 00:12:54.894 }, 00:12:54.894 { 00:12:54.894 "name": "BaseBdev3", 00:12:54.894 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:54.894 "is_configured": true, 00:12:54.894 "data_offset": 2048, 00:12:54.894 "data_size": 63488 00:12:54.894 }, 00:12:54.894 { 00:12:54.894 "name": "BaseBdev4", 00:12:54.894 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:54.894 "is_configured": true, 00:12:54.894 "data_offset": 2048, 00:12:54.894 "data_size": 63488 00:12:54.894 } 00:12:54.894 ] 00:12:54.894 }' 00:12:54.894 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=385 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.895 "name": "raid_bdev1", 00:12:54.895 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:54.895 "strip_size_kb": 0, 00:12:54.895 "state": "online", 00:12:54.895 "raid_level": "raid1", 00:12:54.895 "superblock": true, 00:12:54.895 "num_base_bdevs": 4, 00:12:54.895 "num_base_bdevs_discovered": 3, 00:12:54.895 "num_base_bdevs_operational": 3, 00:12:54.895 "process": { 00:12:54.895 "type": "rebuild", 00:12:54.895 "target": "spare", 00:12:54.895 "progress": { 00:12:54.895 "blocks": 26624, 00:12:54.895 "percent": 41 00:12:54.895 } 00:12:54.895 }, 00:12:54.895 "base_bdevs_list": [ 00:12:54.895 { 00:12:54.895 "name": "spare", 00:12:54.895 "uuid": "9c0b5814-5222-53f0-94d5-466c80df0d24", 00:12:54.895 "is_configured": true, 00:12:54.895 "data_offset": 2048, 00:12:54.895 "data_size": 63488 00:12:54.895 }, 00:12:54.895 { 00:12:54.895 "name": null, 00:12:54.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.895 "is_configured": false, 00:12:54.895 "data_offset": 0, 00:12:54.895 "data_size": 63488 00:12:54.895 }, 00:12:54.895 { 00:12:54.895 "name": "BaseBdev3", 00:12:54.895 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:54.895 "is_configured": true, 00:12:54.895 "data_offset": 2048, 00:12:54.895 "data_size": 63488 00:12:54.895 }, 00:12:54.895 { 00:12:54.895 "name": "BaseBdev4", 00:12:54.895 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:54.895 "is_configured": true, 00:12:54.895 "data_offset": 2048, 00:12:54.895 "data_size": 63488 
00:12:54.895 } 00:12:54.895 ] 00:12:54.895 }' 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.895 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.155 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.155 16:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.096 "name": "raid_bdev1", 00:12:56.096 "uuid": 
"73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:56.096 "strip_size_kb": 0, 00:12:56.096 "state": "online", 00:12:56.096 "raid_level": "raid1", 00:12:56.096 "superblock": true, 00:12:56.096 "num_base_bdevs": 4, 00:12:56.096 "num_base_bdevs_discovered": 3, 00:12:56.096 "num_base_bdevs_operational": 3, 00:12:56.096 "process": { 00:12:56.096 "type": "rebuild", 00:12:56.096 "target": "spare", 00:12:56.096 "progress": { 00:12:56.096 "blocks": 49152, 00:12:56.096 "percent": 77 00:12:56.096 } 00:12:56.096 }, 00:12:56.096 "base_bdevs_list": [ 00:12:56.096 { 00:12:56.096 "name": "spare", 00:12:56.096 "uuid": "9c0b5814-5222-53f0-94d5-466c80df0d24", 00:12:56.096 "is_configured": true, 00:12:56.096 "data_offset": 2048, 00:12:56.096 "data_size": 63488 00:12:56.096 }, 00:12:56.096 { 00:12:56.096 "name": null, 00:12:56.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.096 "is_configured": false, 00:12:56.096 "data_offset": 0, 00:12:56.096 "data_size": 63488 00:12:56.096 }, 00:12:56.096 { 00:12:56.096 "name": "BaseBdev3", 00:12:56.096 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:56.096 "is_configured": true, 00:12:56.096 "data_offset": 2048, 00:12:56.096 "data_size": 63488 00:12:56.096 }, 00:12:56.096 { 00:12:56.096 "name": "BaseBdev4", 00:12:56.096 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:56.096 "is_configured": true, 00:12:56.096 "data_offset": 2048, 00:12:56.096 "data_size": 63488 00:12:56.096 } 00:12:56.096 ] 00:12:56.096 }' 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.096 16:38:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:56.665 [2024-12-07 16:38:55.455396] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:56.665 [2024-12-07 16:38:55.455493] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:56.665 [2024-12-07 16:38:55.455608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.232 16:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.232 16:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.232 16:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.232 16:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.232 16:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.232 16:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.232 16:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.232 16:38:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.232 16:38:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.232 16:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.232 16:38:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.232 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.232 "name": "raid_bdev1", 00:12:57.232 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:57.232 "strip_size_kb": 0, 00:12:57.232 "state": "online", 00:12:57.232 "raid_level": "raid1", 00:12:57.232 "superblock": true, 00:12:57.232 "num_base_bdevs": 
4, 00:12:57.232 "num_base_bdevs_discovered": 3, 00:12:57.232 "num_base_bdevs_operational": 3, 00:12:57.232 "base_bdevs_list": [ 00:12:57.232 { 00:12:57.232 "name": "spare", 00:12:57.232 "uuid": "9c0b5814-5222-53f0-94d5-466c80df0d24", 00:12:57.232 "is_configured": true, 00:12:57.232 "data_offset": 2048, 00:12:57.232 "data_size": 63488 00:12:57.232 }, 00:12:57.232 { 00:12:57.232 "name": null, 00:12:57.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.232 "is_configured": false, 00:12:57.232 "data_offset": 0, 00:12:57.232 "data_size": 63488 00:12:57.232 }, 00:12:57.232 { 00:12:57.232 "name": "BaseBdev3", 00:12:57.232 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:57.232 "is_configured": true, 00:12:57.232 "data_offset": 2048, 00:12:57.232 "data_size": 63488 00:12:57.232 }, 00:12:57.232 { 00:12:57.232 "name": "BaseBdev4", 00:12:57.232 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:57.232 "is_configured": true, 00:12:57.232 "data_offset": 2048, 00:12:57.232 "data_size": 63488 00:12:57.232 } 00:12:57.232 ] 00:12:57.232 }' 00:12:57.232 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.232 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:57.232 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.232 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:57.232 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:57.232 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:57.232 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.232 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:57.232 16:38:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:57.232 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.232 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.232 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.232 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.232 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.492 "name": "raid_bdev1", 00:12:57.492 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:57.492 "strip_size_kb": 0, 00:12:57.492 "state": "online", 00:12:57.492 "raid_level": "raid1", 00:12:57.492 "superblock": true, 00:12:57.492 "num_base_bdevs": 4, 00:12:57.492 "num_base_bdevs_discovered": 3, 00:12:57.492 "num_base_bdevs_operational": 3, 00:12:57.492 "base_bdevs_list": [ 00:12:57.492 { 00:12:57.492 "name": "spare", 00:12:57.492 "uuid": "9c0b5814-5222-53f0-94d5-466c80df0d24", 00:12:57.492 "is_configured": true, 00:12:57.492 "data_offset": 2048, 00:12:57.492 "data_size": 63488 00:12:57.492 }, 00:12:57.492 { 00:12:57.492 "name": null, 00:12:57.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.492 "is_configured": false, 00:12:57.492 "data_offset": 0, 00:12:57.492 "data_size": 63488 00:12:57.492 }, 00:12:57.492 { 00:12:57.492 "name": "BaseBdev3", 00:12:57.492 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:57.492 "is_configured": true, 00:12:57.492 "data_offset": 2048, 00:12:57.492 "data_size": 63488 00:12:57.492 }, 00:12:57.492 { 00:12:57.492 "name": "BaseBdev4", 00:12:57.492 "uuid": 
"fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:57.492 "is_configured": true, 00:12:57.492 "data_offset": 2048, 00:12:57.492 "data_size": 63488 00:12:57.492 } 00:12:57.492 ] 00:12:57.492 }' 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.492 16:38:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.492 "name": "raid_bdev1", 00:12:57.492 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:57.492 "strip_size_kb": 0, 00:12:57.492 "state": "online", 00:12:57.492 "raid_level": "raid1", 00:12:57.492 "superblock": true, 00:12:57.492 "num_base_bdevs": 4, 00:12:57.492 "num_base_bdevs_discovered": 3, 00:12:57.492 "num_base_bdevs_operational": 3, 00:12:57.492 "base_bdevs_list": [ 00:12:57.492 { 00:12:57.492 "name": "spare", 00:12:57.492 "uuid": "9c0b5814-5222-53f0-94d5-466c80df0d24", 00:12:57.492 "is_configured": true, 00:12:57.492 "data_offset": 2048, 00:12:57.492 "data_size": 63488 00:12:57.492 }, 00:12:57.492 { 00:12:57.492 "name": null, 00:12:57.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.492 "is_configured": false, 00:12:57.492 "data_offset": 0, 00:12:57.492 "data_size": 63488 00:12:57.492 }, 00:12:57.492 { 00:12:57.492 "name": "BaseBdev3", 00:12:57.492 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:57.492 "is_configured": true, 00:12:57.492 "data_offset": 2048, 00:12:57.492 "data_size": 63488 00:12:57.492 }, 00:12:57.492 { 00:12:57.492 "name": "BaseBdev4", 00:12:57.492 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:57.492 "is_configured": true, 00:12:57.492 "data_offset": 2048, 00:12:57.492 "data_size": 63488 00:12:57.492 } 00:12:57.492 ] 00:12:57.492 }' 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.492 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.063 [2024-12-07 16:38:56.664050] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.063 [2024-12-07 16:38:56.664084] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.063 [2024-12-07 16:38:56.664194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.063 [2024-12-07 16:38:56.664289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.063 [2024-12-07 16:38:56.664304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:58.063 
16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:58.063 /dev/nbd0 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.063 16:38:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.063 1+0 records in 00:12:58.063 1+0 records out 00:12:58.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323215 s, 12.7 MB/s 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:58.063 16:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:58.324 /dev/nbd1 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- 
# (( i <= 20 )) 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.324 1+0 records in 00:12:58.324 1+0 records out 00:12:58.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423673 s, 9.7 MB/s 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:58.324 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:58.583 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:58.583 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.583 16:38:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:58.583 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.583 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:58.583 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.584 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:58.584 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.843 [2024-12-07 16:38:57.727657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:58.843 [2024-12-07 16:38:57.727761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.843 [2024-12-07 16:38:57.727804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:58.843 [2024-12-07 16:38:57.727841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.843 [2024-12-07 16:38:57.730223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.843 [2024-12-07 16:38:57.730291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:12:58.843 [2024-12-07 16:38:57.730440] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:58.843 [2024-12-07 16:38:57.730513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:58.843 [2024-12-07 16:38:57.730693] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:58.843 [2024-12-07 16:38:57.730845] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:58.843 spare 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.843 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.103 [2024-12-07 16:38:57.830802] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:59.103 [2024-12-07 16:38:57.830945] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:59.103 [2024-12-07 16:38:57.831425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:59.103 [2024-12-07 16:38:57.831689] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:59.103 [2024-12-07 16:38:57.831732] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:59.103 [2024-12-07 16:38:57.831991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:59.103 16:38:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.103 "name": "raid_bdev1", 00:12:59.103 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:59.103 "strip_size_kb": 0, 00:12:59.103 "state": "online", 00:12:59.103 "raid_level": "raid1", 00:12:59.103 "superblock": true, 00:12:59.103 "num_base_bdevs": 4, 00:12:59.103 "num_base_bdevs_discovered": 3, 00:12:59.103 "num_base_bdevs_operational": 3, 00:12:59.103 "base_bdevs_list": [ 00:12:59.103 { 
00:12:59.103 "name": "spare", 00:12:59.103 "uuid": "9c0b5814-5222-53f0-94d5-466c80df0d24", 00:12:59.103 "is_configured": true, 00:12:59.103 "data_offset": 2048, 00:12:59.103 "data_size": 63488 00:12:59.103 }, 00:12:59.103 { 00:12:59.103 "name": null, 00:12:59.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.103 "is_configured": false, 00:12:59.103 "data_offset": 2048, 00:12:59.103 "data_size": 63488 00:12:59.103 }, 00:12:59.103 { 00:12:59.103 "name": "BaseBdev3", 00:12:59.103 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:59.103 "is_configured": true, 00:12:59.103 "data_offset": 2048, 00:12:59.103 "data_size": 63488 00:12:59.103 }, 00:12:59.103 { 00:12:59.103 "name": "BaseBdev4", 00:12:59.103 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:59.103 "is_configured": true, 00:12:59.103 "data_offset": 2048, 00:12:59.103 "data_size": 63488 00:12:59.103 } 00:12:59.103 ] 00:12:59.103 }' 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.103 16:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.673 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.673 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.674 "name": "raid_bdev1", 00:12:59.674 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:59.674 "strip_size_kb": 0, 00:12:59.674 "state": "online", 00:12:59.674 "raid_level": "raid1", 00:12:59.674 "superblock": true, 00:12:59.674 "num_base_bdevs": 4, 00:12:59.674 "num_base_bdevs_discovered": 3, 00:12:59.674 "num_base_bdevs_operational": 3, 00:12:59.674 "base_bdevs_list": [ 00:12:59.674 { 00:12:59.674 "name": "spare", 00:12:59.674 "uuid": "9c0b5814-5222-53f0-94d5-466c80df0d24", 00:12:59.674 "is_configured": true, 00:12:59.674 "data_offset": 2048, 00:12:59.674 "data_size": 63488 00:12:59.674 }, 00:12:59.674 { 00:12:59.674 "name": null, 00:12:59.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.674 "is_configured": false, 00:12:59.674 "data_offset": 2048, 00:12:59.674 "data_size": 63488 00:12:59.674 }, 00:12:59.674 { 00:12:59.674 "name": "BaseBdev3", 00:12:59.674 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:59.674 "is_configured": true, 00:12:59.674 "data_offset": 2048, 00:12:59.674 "data_size": 63488 00:12:59.674 }, 00:12:59.674 { 00:12:59.674 "name": "BaseBdev4", 00:12:59.674 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:59.674 "is_configured": true, 00:12:59.674 "data_offset": 2048, 00:12:59.674 "data_size": 63488 00:12:59.674 } 00:12:59.674 ] 00:12:59.674 }' 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.674 16:38:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.674 [2024-12-07 16:38:58.503386] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.674 16:38:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.674 "name": "raid_bdev1", 00:12:59.674 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:12:59.674 "strip_size_kb": 0, 00:12:59.674 "state": "online", 00:12:59.674 "raid_level": "raid1", 00:12:59.674 "superblock": true, 00:12:59.674 "num_base_bdevs": 4, 00:12:59.674 "num_base_bdevs_discovered": 2, 00:12:59.674 "num_base_bdevs_operational": 2, 00:12:59.674 "base_bdevs_list": [ 00:12:59.674 { 00:12:59.674 "name": null, 00:12:59.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.674 "is_configured": false, 00:12:59.674 "data_offset": 0, 00:12:59.674 "data_size": 63488 00:12:59.674 }, 00:12:59.674 { 00:12:59.674 "name": null, 00:12:59.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.674 "is_configured": false, 00:12:59.674 "data_offset": 2048, 00:12:59.674 "data_size": 63488 00:12:59.674 }, 00:12:59.674 { 00:12:59.674 "name": "BaseBdev3", 00:12:59.674 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:12:59.674 
"is_configured": true, 00:12:59.674 "data_offset": 2048, 00:12:59.674 "data_size": 63488 00:12:59.674 }, 00:12:59.674 { 00:12:59.674 "name": "BaseBdev4", 00:12:59.674 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:12:59.674 "is_configured": true, 00:12:59.674 "data_offset": 2048, 00:12:59.674 "data_size": 63488 00:12:59.674 } 00:12:59.674 ] 00:12:59.674 }' 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.674 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.278 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:00.278 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.278 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.278 [2024-12-07 16:38:58.958593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.278 [2024-12-07 16:38:58.958851] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:00.278 [2024-12-07 16:38:58.958921] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:00.278 [2024-12-07 16:38:58.958987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.278 [2024-12-07 16:38:58.964712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:00.278 16:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.278 16:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:00.278 [2024-12-07 16:38:58.966909] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:01.217 16:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.217 16:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.217 16:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.217 16:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.217 16:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.217 16:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.217 16:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.217 16:38:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.217 16:38:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.217 16:38:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.217 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.217 "name": "raid_bdev1", 00:13:01.217 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:13:01.217 "strip_size_kb": 0, 00:13:01.217 "state": "online", 00:13:01.217 "raid_level": "raid1", 
00:13:01.217 "superblock": true, 00:13:01.217 "num_base_bdevs": 4, 00:13:01.217 "num_base_bdevs_discovered": 3, 00:13:01.217 "num_base_bdevs_operational": 3, 00:13:01.217 "process": { 00:13:01.217 "type": "rebuild", 00:13:01.217 "target": "spare", 00:13:01.217 "progress": { 00:13:01.217 "blocks": 20480, 00:13:01.217 "percent": 32 00:13:01.217 } 00:13:01.217 }, 00:13:01.217 "base_bdevs_list": [ 00:13:01.217 { 00:13:01.217 "name": "spare", 00:13:01.217 "uuid": "9c0b5814-5222-53f0-94d5-466c80df0d24", 00:13:01.217 "is_configured": true, 00:13:01.217 "data_offset": 2048, 00:13:01.217 "data_size": 63488 00:13:01.217 }, 00:13:01.217 { 00:13:01.217 "name": null, 00:13:01.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.217 "is_configured": false, 00:13:01.217 "data_offset": 2048, 00:13:01.217 "data_size": 63488 00:13:01.217 }, 00:13:01.217 { 00:13:01.217 "name": "BaseBdev3", 00:13:01.217 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:13:01.217 "is_configured": true, 00:13:01.217 "data_offset": 2048, 00:13:01.217 "data_size": 63488 00:13:01.217 }, 00:13:01.217 { 00:13:01.217 "name": "BaseBdev4", 00:13:01.217 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:13:01.217 "is_configured": true, 00:13:01.217 "data_offset": 2048, 00:13:01.217 "data_size": 63488 00:13:01.217 } 00:13:01.217 ] 00:13:01.217 }' 00:13:01.217 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.217 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.217 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.477 [2024-12-07 16:39:00.127474] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.477 [2024-12-07 16:39:00.174661] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:01.477 [2024-12-07 16:39:00.174773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.477 [2024-12-07 16:39:00.174811] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.477 [2024-12-07 16:39:00.174835] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.477 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.477 "name": "raid_bdev1", 00:13:01.477 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:13:01.477 "strip_size_kb": 0, 00:13:01.477 "state": "online", 00:13:01.477 "raid_level": "raid1", 00:13:01.478 "superblock": true, 00:13:01.478 "num_base_bdevs": 4, 00:13:01.478 "num_base_bdevs_discovered": 2, 00:13:01.478 "num_base_bdevs_operational": 2, 00:13:01.478 "base_bdevs_list": [ 00:13:01.478 { 00:13:01.478 "name": null, 00:13:01.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.478 "is_configured": false, 00:13:01.478 "data_offset": 0, 00:13:01.478 "data_size": 63488 00:13:01.478 }, 00:13:01.478 { 00:13:01.478 "name": null, 00:13:01.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.478 "is_configured": false, 00:13:01.478 "data_offset": 2048, 00:13:01.478 "data_size": 63488 00:13:01.478 }, 00:13:01.478 { 00:13:01.478 "name": "BaseBdev3", 00:13:01.478 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:13:01.478 "is_configured": true, 00:13:01.478 "data_offset": 2048, 00:13:01.478 "data_size": 63488 00:13:01.478 }, 00:13:01.478 { 00:13:01.478 "name": "BaseBdev4", 00:13:01.478 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:13:01.478 "is_configured": true, 00:13:01.478 "data_offset": 2048, 00:13:01.478 "data_size": 63488 00:13:01.478 } 00:13:01.478 ] 00:13:01.478 }' 00:13:01.478 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:01.478 16:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.046 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:02.046 16:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.046 16:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.046 [2024-12-07 16:39:00.644988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:02.046 [2024-12-07 16:39:00.645130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.046 [2024-12-07 16:39:00.645182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:02.046 [2024-12-07 16:39:00.645234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.046 [2024-12-07 16:39:00.645825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.046 [2024-12-07 16:39:00.645888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:02.046 [2024-12-07 16:39:00.646039] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:02.046 [2024-12-07 16:39:00.646093] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:02.046 [2024-12-07 16:39:00.646150] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:02.046 [2024-12-07 16:39:00.646201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:02.046 [2024-12-07 16:39:00.652025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:02.046 spare 00:13:02.046 16:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.046 16:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:02.046 [2024-12-07 16:39:00.654285] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.980 "name": "raid_bdev1", 00:13:02.980 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:13:02.980 "strip_size_kb": 0, 00:13:02.980 "state": "online", 00:13:02.980 
"raid_level": "raid1", 00:13:02.980 "superblock": true, 00:13:02.980 "num_base_bdevs": 4, 00:13:02.980 "num_base_bdevs_discovered": 3, 00:13:02.980 "num_base_bdevs_operational": 3, 00:13:02.980 "process": { 00:13:02.980 "type": "rebuild", 00:13:02.980 "target": "spare", 00:13:02.980 "progress": { 00:13:02.980 "blocks": 20480, 00:13:02.980 "percent": 32 00:13:02.980 } 00:13:02.980 }, 00:13:02.980 "base_bdevs_list": [ 00:13:02.980 { 00:13:02.980 "name": "spare", 00:13:02.980 "uuid": "9c0b5814-5222-53f0-94d5-466c80df0d24", 00:13:02.980 "is_configured": true, 00:13:02.980 "data_offset": 2048, 00:13:02.980 "data_size": 63488 00:13:02.980 }, 00:13:02.980 { 00:13:02.980 "name": null, 00:13:02.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.980 "is_configured": false, 00:13:02.980 "data_offset": 2048, 00:13:02.980 "data_size": 63488 00:13:02.980 }, 00:13:02.980 { 00:13:02.980 "name": "BaseBdev3", 00:13:02.980 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:13:02.980 "is_configured": true, 00:13:02.980 "data_offset": 2048, 00:13:02.980 "data_size": 63488 00:13:02.980 }, 00:13:02.980 { 00:13:02.980 "name": "BaseBdev4", 00:13:02.980 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:13:02.980 "is_configured": true, 00:13:02.980 "data_offset": 2048, 00:13:02.980 "data_size": 63488 00:13:02.980 } 00:13:02.980 ] 00:13:02.980 }' 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.980 16:39:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.980 [2024-12-07 16:39:01.814371] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.980 [2024-12-07 16:39:01.862026] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:02.980 [2024-12-07 16:39:01.862088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.980 [2024-12-07 16:39:01.862108] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.980 [2024-12-07 16:39:01.862115] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:02.981 16:39:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.981 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:02.981 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.981 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.981 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.981 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.981 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:02.981 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.981 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.981 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.981 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.239 
16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.239 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.239 16:39:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.239 16:39:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.239 16:39:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.239 16:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.239 "name": "raid_bdev1", 00:13:03.239 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:13:03.239 "strip_size_kb": 0, 00:13:03.239 "state": "online", 00:13:03.239 "raid_level": "raid1", 00:13:03.239 "superblock": true, 00:13:03.239 "num_base_bdevs": 4, 00:13:03.239 "num_base_bdevs_discovered": 2, 00:13:03.239 "num_base_bdevs_operational": 2, 00:13:03.239 "base_bdevs_list": [ 00:13:03.239 { 00:13:03.239 "name": null, 00:13:03.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.239 "is_configured": false, 00:13:03.239 "data_offset": 0, 00:13:03.239 "data_size": 63488 00:13:03.239 }, 00:13:03.239 { 00:13:03.239 "name": null, 00:13:03.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.239 "is_configured": false, 00:13:03.239 "data_offset": 2048, 00:13:03.239 "data_size": 63488 00:13:03.239 }, 00:13:03.239 { 00:13:03.239 "name": "BaseBdev3", 00:13:03.239 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:13:03.239 "is_configured": true, 00:13:03.239 "data_offset": 2048, 00:13:03.239 "data_size": 63488 00:13:03.239 }, 00:13:03.239 { 00:13:03.239 "name": "BaseBdev4", 00:13:03.239 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:13:03.239 "is_configured": true, 00:13:03.240 "data_offset": 2048, 00:13:03.240 "data_size": 63488 00:13:03.240 } 00:13:03.240 ] 00:13:03.240 }' 00:13:03.240 16:39:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.240 16:39:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.499 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.499 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.499 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:03.499 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.499 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.499 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.499 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.499 16:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.499 16:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.499 16:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.499 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.499 "name": "raid_bdev1", 00:13:03.499 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:13:03.499 "strip_size_kb": 0, 00:13:03.499 "state": "online", 00:13:03.499 "raid_level": "raid1", 00:13:03.499 "superblock": true, 00:13:03.499 "num_base_bdevs": 4, 00:13:03.499 "num_base_bdevs_discovered": 2, 00:13:03.499 "num_base_bdevs_operational": 2, 00:13:03.499 "base_bdevs_list": [ 00:13:03.499 { 00:13:03.499 "name": null, 00:13:03.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.499 "is_configured": false, 00:13:03.499 "data_offset": 0, 00:13:03.499 "data_size": 63488 00:13:03.499 }, 00:13:03.499 
{ 00:13:03.499 "name": null, 00:13:03.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.499 "is_configured": false, 00:13:03.499 "data_offset": 2048, 00:13:03.499 "data_size": 63488 00:13:03.499 }, 00:13:03.499 { 00:13:03.499 "name": "BaseBdev3", 00:13:03.499 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:13:03.499 "is_configured": true, 00:13:03.499 "data_offset": 2048, 00:13:03.499 "data_size": 63488 00:13:03.499 }, 00:13:03.499 { 00:13:03.499 "name": "BaseBdev4", 00:13:03.499 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:13:03.499 "is_configured": true, 00:13:03.499 "data_offset": 2048, 00:13:03.499 "data_size": 63488 00:13:03.499 } 00:13:03.499 ] 00:13:03.499 }' 00:13:03.499 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.759 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:03.759 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.759 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:03.759 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:03.759 16:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.759 16:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.759 16:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.759 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:03.759 16:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.759 16:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.759 [2024-12-07 16:39:02.491985] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:03.759 [2024-12-07 16:39:02.492090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.759 [2024-12-07 16:39:02.492136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:03.759 [2024-12-07 16:39:02.492166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.759 [2024-12-07 16:39:02.492707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.759 [2024-12-07 16:39:02.492760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:03.759 [2024-12-07 16:39:02.492875] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:03.759 [2024-12-07 16:39:02.492918] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:03.759 [2024-12-07 16:39:02.492962] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:03.759 [2024-12-07 16:39:02.493025] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:03.759 BaseBdev1 00:13:03.759 16:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.759 16:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.699 16:39:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.699 "name": "raid_bdev1", 00:13:04.699 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:13:04.699 "strip_size_kb": 0, 00:13:04.699 "state": "online", 00:13:04.699 "raid_level": "raid1", 00:13:04.699 "superblock": true, 00:13:04.699 "num_base_bdevs": 4, 00:13:04.699 "num_base_bdevs_discovered": 2, 00:13:04.699 "num_base_bdevs_operational": 2, 00:13:04.699 "base_bdevs_list": [ 00:13:04.699 { 00:13:04.699 "name": null, 00:13:04.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.699 "is_configured": false, 00:13:04.699 "data_offset": 0, 00:13:04.699 "data_size": 63488 00:13:04.699 }, 00:13:04.699 { 00:13:04.699 "name": null, 00:13:04.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.699 
"is_configured": false, 00:13:04.699 "data_offset": 2048, 00:13:04.699 "data_size": 63488 00:13:04.699 }, 00:13:04.699 { 00:13:04.699 "name": "BaseBdev3", 00:13:04.699 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:13:04.699 "is_configured": true, 00:13:04.699 "data_offset": 2048, 00:13:04.699 "data_size": 63488 00:13:04.699 }, 00:13:04.699 { 00:13:04.699 "name": "BaseBdev4", 00:13:04.699 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:13:04.699 "is_configured": true, 00:13:04.699 "data_offset": 2048, 00:13:04.699 "data_size": 63488 00:13:04.699 } 00:13:04.699 ] 00:13:04.699 }' 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.699 16:39:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.270 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.270 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.270 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.270 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.270 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.270 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.270 16:39:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.270 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.270 16:39:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.270 16:39:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.270 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:05.270 "name": "raid_bdev1", 00:13:05.270 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:13:05.270 "strip_size_kb": 0, 00:13:05.270 "state": "online", 00:13:05.270 "raid_level": "raid1", 00:13:05.270 "superblock": true, 00:13:05.270 "num_base_bdevs": 4, 00:13:05.270 "num_base_bdevs_discovered": 2, 00:13:05.270 "num_base_bdevs_operational": 2, 00:13:05.270 "base_bdevs_list": [ 00:13:05.270 { 00:13:05.270 "name": null, 00:13:05.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.270 "is_configured": false, 00:13:05.270 "data_offset": 0, 00:13:05.270 "data_size": 63488 00:13:05.270 }, 00:13:05.270 { 00:13:05.270 "name": null, 00:13:05.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.270 "is_configured": false, 00:13:05.270 "data_offset": 2048, 00:13:05.270 "data_size": 63488 00:13:05.270 }, 00:13:05.270 { 00:13:05.270 "name": "BaseBdev3", 00:13:05.270 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:13:05.270 "is_configured": true, 00:13:05.270 "data_offset": 2048, 00:13:05.270 "data_size": 63488 00:13:05.270 }, 00:13:05.270 { 00:13:05.270 "name": "BaseBdev4", 00:13:05.270 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:13:05.270 "is_configured": true, 00:13:05.270 "data_offset": 2048, 00:13:05.270 "data_size": 63488 00:13:05.270 } 00:13:05.270 ] 00:13:05.270 }' 00:13:05.270 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.270 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.270 16:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.270 [2024-12-07 16:39:04.021499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:05.270 [2024-12-07 16:39:04.021757] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:05.270 [2024-12-07 16:39:04.021812] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:05.270 request: 00:13:05.270 { 00:13:05.270 "base_bdev": "BaseBdev1", 00:13:05.270 "raid_bdev": "raid_bdev1", 00:13:05.270 "method": "bdev_raid_add_base_bdev", 00:13:05.270 "req_id": 1 00:13:05.270 } 00:13:05.270 Got JSON-RPC error response 00:13:05.270 response: 00:13:05.270 { 00:13:05.270 "code": -22, 00:13:05.270 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:05.270 } 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:05.270 16:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.210 "name": "raid_bdev1", 00:13:06.210 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:13:06.210 "strip_size_kb": 0, 00:13:06.210 "state": "online", 00:13:06.210 "raid_level": "raid1", 00:13:06.210 "superblock": true, 00:13:06.210 "num_base_bdevs": 4, 00:13:06.210 "num_base_bdevs_discovered": 2, 00:13:06.210 "num_base_bdevs_operational": 2, 00:13:06.210 "base_bdevs_list": [ 00:13:06.210 { 00:13:06.210 "name": null, 00:13:06.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.210 "is_configured": false, 00:13:06.210 "data_offset": 0, 00:13:06.210 "data_size": 63488 00:13:06.210 }, 00:13:06.210 { 00:13:06.210 "name": null, 00:13:06.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.210 "is_configured": false, 00:13:06.210 "data_offset": 2048, 00:13:06.210 "data_size": 63488 00:13:06.210 }, 00:13:06.210 { 00:13:06.210 "name": "BaseBdev3", 00:13:06.210 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:13:06.210 "is_configured": true, 00:13:06.210 "data_offset": 2048, 00:13:06.210 "data_size": 63488 00:13:06.210 }, 00:13:06.210 { 00:13:06.210 "name": "BaseBdev4", 00:13:06.210 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:13:06.210 "is_configured": true, 00:13:06.210 "data_offset": 2048, 00:13:06.210 "data_size": 63488 00:13:06.210 } 00:13:06.210 ] 00:13:06.210 }' 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.210 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.779 16:39:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.779 "name": "raid_bdev1", 00:13:06.779 "uuid": "73df92db-3e2f-4ce3-8fd7-021ea666f4ef", 00:13:06.779 "strip_size_kb": 0, 00:13:06.779 "state": "online", 00:13:06.779 "raid_level": "raid1", 00:13:06.779 "superblock": true, 00:13:06.779 "num_base_bdevs": 4, 00:13:06.779 "num_base_bdevs_discovered": 2, 00:13:06.779 "num_base_bdevs_operational": 2, 00:13:06.779 "base_bdevs_list": [ 00:13:06.779 { 00:13:06.779 "name": null, 00:13:06.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.779 "is_configured": false, 00:13:06.779 "data_offset": 0, 00:13:06.779 "data_size": 63488 00:13:06.779 }, 00:13:06.779 { 00:13:06.779 "name": null, 00:13:06.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.779 "is_configured": false, 00:13:06.779 "data_offset": 2048, 00:13:06.779 "data_size": 63488 00:13:06.779 }, 00:13:06.779 { 00:13:06.779 "name": "BaseBdev3", 00:13:06.779 "uuid": "ba7971ba-6bb4-5c95-8330-e75f94a6c3e0", 00:13:06.779 "is_configured": true, 00:13:06.779 "data_offset": 2048, 00:13:06.779 "data_size": 63488 00:13:06.779 }, 
00:13:06.779 { 00:13:06.779 "name": "BaseBdev4", 00:13:06.779 "uuid": "fd7edf59-b971-5b46-b12c-79287ec0d074", 00:13:06.779 "is_configured": true, 00:13:06.779 "data_offset": 2048, 00:13:06.779 "data_size": 63488 00:13:06.779 } 00:13:06.779 ] 00:13:06.779 }' 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88935 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88935 ']' 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88935 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88935 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:06.779 killing process with pid 88935 00:13:06.779 Received shutdown signal, test time was about 60.000000 seconds 00:13:06.779 00:13:06.779 Latency(us) 00:13:06.779 [2024-12-07T16:39:05.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.779 [2024-12-07T16:39:05.678Z] =================================================================================================================== 00:13:06.779 [2024-12-07T16:39:05.678Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 
00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88935' 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88935 00:13:06.779 [2024-12-07 16:39:05.616985] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.779 16:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88935 00:13:06.779 [2024-12-07 16:39:05.617139] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.779 [2024-12-07 16:39:05.617229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.779 [2024-12-07 16:39:05.617242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:07.038 [2024-12-07 16:39:05.716751] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:07.298 00:13:07.298 real 0m23.169s 00:13:07.298 user 0m28.399s 00:13:07.298 sys 0m3.896s 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.298 ************************************ 00:13:07.298 END TEST raid_rebuild_test_sb 00:13:07.298 ************************************ 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.298 16:39:06 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:07.298 16:39:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:07.298 16:39:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.298 16:39:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:07.298 ************************************ 00:13:07.298 START TEST raid_rebuild_test_io 00:13:07.298 ************************************ 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89665 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89665 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89665 ']' 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:13:07.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:07.298 16:39:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.558 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:07.558 Zero copy mechanism will not be used. 00:13:07.558 [2024-12-07 16:39:06.268648] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:07.558 [2024-12-07 16:39:06.268774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89665 ] 00:13:07.558 [2024-12-07 16:39:06.428396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.817 [2024-12-07 16:39:06.500832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.817 [2024-12-07 16:39:06.579249] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.817 [2024-12-07 16:39:06.579435] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.398 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:08.398 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:13:08.398 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:08.398 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:13:08.398 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.398 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.398 BaseBdev1_malloc 00:13:08.398 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.398 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:08.398 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.398 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.398 [2024-12-07 16:39:07.131402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:08.398 [2024-12-07 16:39:07.131520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.399 [2024-12-07 16:39:07.131569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:08.399 [2024-12-07 16:39:07.131608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.399 [2024-12-07 16:39:07.134126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.399 [2024-12-07 16:39:07.134196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:08.399 BaseBdev1 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.399 BaseBdev2_malloc 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.399 [2024-12-07 16:39:07.181327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:08.399 [2024-12-07 16:39:07.181444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.399 [2024-12-07 16:39:07.181488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:08.399 [2024-12-07 16:39:07.181521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.399 [2024-12-07 16:39:07.184166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.399 [2024-12-07 16:39:07.184238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:08.399 BaseBdev2 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.399 BaseBdev3_malloc 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.399 16:39:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.399 [2024-12-07 16:39:07.216823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:08.399 [2024-12-07 16:39:07.216914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.399 [2024-12-07 16:39:07.216959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:08.399 [2024-12-07 16:39:07.216988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.399 [2024-12-07 16:39:07.219450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.399 [2024-12-07 16:39:07.219522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:08.399 BaseBdev3 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.399 BaseBdev4_malloc 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.399 [2024-12-07 16:39:07.251785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:08.399 [2024-12-07 16:39:07.251881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.399 [2024-12-07 16:39:07.251926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:08.399 [2024-12-07 16:39:07.251955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.399 [2024-12-07 16:39:07.254338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.399 [2024-12-07 16:39:07.254412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:08.399 BaseBdev4 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.399 spare_malloc 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.399 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.680 spare_delay 00:13:08.680 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.680 
16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:08.680 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.680 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.680 [2024-12-07 16:39:07.298798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:08.680 [2024-12-07 16:39:07.298902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.680 [2024-12-07 16:39:07.298943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:08.680 [2024-12-07 16:39:07.298973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.680 [2024-12-07 16:39:07.301586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.680 [2024-12-07 16:39:07.301655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:08.680 spare 00:13:08.680 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.680 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:08.680 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.680 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.680 [2024-12-07 16:39:07.310875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:08.680 [2024-12-07 16:39:07.313098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:08.680 [2024-12-07 16:39:07.313210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:08.680 [2024-12-07 16:39:07.313289] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:08.680 [2024-12-07 16:39:07.313411] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:08.680 [2024-12-07 16:39:07.313455] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:08.680 [2024-12-07 16:39:07.313755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:08.680 [2024-12-07 16:39:07.313967] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:08.680 [2024-12-07 16:39:07.313997] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:08.680 [2024-12-07 16:39:07.314118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.680 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.680 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:08.680 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.681 16:39:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.681 "name": "raid_bdev1", 00:13:08.681 "uuid": "19da6dbc-c607-41d7-a824-e01b79f84390", 00:13:08.681 "strip_size_kb": 0, 00:13:08.681 "state": "online", 00:13:08.681 "raid_level": "raid1", 00:13:08.681 "superblock": false, 00:13:08.681 "num_base_bdevs": 4, 00:13:08.681 "num_base_bdevs_discovered": 4, 00:13:08.681 "num_base_bdevs_operational": 4, 00:13:08.681 "base_bdevs_list": [ 00:13:08.681 { 00:13:08.681 "name": "BaseBdev1", 00:13:08.681 "uuid": "c48dfc1b-e732-5aea-b45f-335d49c42400", 00:13:08.681 "is_configured": true, 00:13:08.681 "data_offset": 0, 00:13:08.681 "data_size": 65536 00:13:08.681 }, 00:13:08.681 { 00:13:08.681 "name": "BaseBdev2", 00:13:08.681 "uuid": "14b0bdfe-d0b6-500f-a484-4a96ba07e79d", 00:13:08.681 "is_configured": true, 00:13:08.681 "data_offset": 0, 00:13:08.681 "data_size": 65536 00:13:08.681 }, 00:13:08.681 { 00:13:08.681 "name": "BaseBdev3", 00:13:08.681 "uuid": "875c1667-649c-5cea-b7cc-d38e81d4049d", 00:13:08.681 "is_configured": true, 00:13:08.681 "data_offset": 0, 00:13:08.681 "data_size": 65536 00:13:08.681 }, 00:13:08.681 { 00:13:08.681 "name": "BaseBdev4", 00:13:08.681 "uuid": "919f6d24-8c31-5562-a508-0e5e3242e4fc", 00:13:08.681 "is_configured": true, 00:13:08.681 "data_offset": 0, 00:13:08.681 "data_size": 65536 
00:13:08.681 } 00:13:08.681 ] 00:13:08.681 }' 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.681 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.940 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:08.940 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:08.940 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.940 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.940 [2024-12-07 16:39:07.766479] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.940 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.941 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:08.941 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.941 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:08.941 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.941 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.941 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.200 [2024-12-07 16:39:07.849941] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.200 "name": "raid_bdev1", 00:13:09.200 "uuid": "19da6dbc-c607-41d7-a824-e01b79f84390", 00:13:09.200 "strip_size_kb": 0, 00:13:09.200 "state": "online", 00:13:09.200 "raid_level": "raid1", 00:13:09.200 "superblock": false, 00:13:09.200 "num_base_bdevs": 4, 00:13:09.200 "num_base_bdevs_discovered": 3, 00:13:09.200 "num_base_bdevs_operational": 3, 00:13:09.200 "base_bdevs_list": [ 00:13:09.200 { 00:13:09.200 "name": null, 00:13:09.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.200 "is_configured": false, 00:13:09.200 "data_offset": 0, 00:13:09.200 "data_size": 65536 00:13:09.200 }, 00:13:09.200 { 00:13:09.200 "name": "BaseBdev2", 00:13:09.200 "uuid": "14b0bdfe-d0b6-500f-a484-4a96ba07e79d", 00:13:09.200 "is_configured": true, 00:13:09.200 "data_offset": 0, 00:13:09.200 "data_size": 65536 00:13:09.200 }, 00:13:09.200 { 00:13:09.200 "name": "BaseBdev3", 00:13:09.200 "uuid": "875c1667-649c-5cea-b7cc-d38e81d4049d", 00:13:09.200 "is_configured": true, 00:13:09.200 "data_offset": 0, 00:13:09.200 "data_size": 65536 00:13:09.200 }, 00:13:09.200 { 00:13:09.200 "name": "BaseBdev4", 00:13:09.200 "uuid": "919f6d24-8c31-5562-a508-0e5e3242e4fc", 00:13:09.200 "is_configured": true, 00:13:09.200 "data_offset": 0, 00:13:09.200 "data_size": 65536 00:13:09.200 } 00:13:09.200 ] 00:13:09.200 }' 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.200 16:39:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.200 [2024-12-07 16:39:07.941244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:09.200 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:13:09.200 Zero copy mechanism will not be used. 00:13:09.200 Running I/O for 60 seconds... 00:13:09.460 16:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:09.460 16:39:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.460 16:39:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.460 [2024-12-07 16:39:08.294933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.460 16:39:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.460 16:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:09.460 [2024-12-07 16:39:08.354735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:09.460 [2024-12-07 16:39:08.357186] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:09.719 [2024-12-07 16:39:08.498615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:09.979 [2024-12-07 16:39:08.735409] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:10.239 148.00 IOPS, 444.00 MiB/s [2024-12-07T16:39:09.138Z] [2024-12-07 16:39:09.064582] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:10.499 [2024-12-07 16:39:09.286371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:10.499 [2024-12-07 16:39:09.287574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:10.499 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:13:10.499 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.499 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.499 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.499 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.499 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.499 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.499 16:39:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.499 16:39:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.499 16:39:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.499 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.499 "name": "raid_bdev1", 00:13:10.499 "uuid": "19da6dbc-c607-41d7-a824-e01b79f84390", 00:13:10.499 "strip_size_kb": 0, 00:13:10.499 "state": "online", 00:13:10.499 "raid_level": "raid1", 00:13:10.499 "superblock": false, 00:13:10.499 "num_base_bdevs": 4, 00:13:10.499 "num_base_bdevs_discovered": 4, 00:13:10.499 "num_base_bdevs_operational": 4, 00:13:10.499 "process": { 00:13:10.499 "type": "rebuild", 00:13:10.499 "target": "spare", 00:13:10.499 "progress": { 00:13:10.499 "blocks": 10240, 00:13:10.499 "percent": 15 00:13:10.499 } 00:13:10.499 }, 00:13:10.499 "base_bdevs_list": [ 00:13:10.499 { 00:13:10.499 "name": "spare", 00:13:10.499 "uuid": "61237ab0-bb0f-52a5-b1dd-64b8fa52e619", 00:13:10.499 "is_configured": true, 00:13:10.499 "data_offset": 0, 00:13:10.499 "data_size": 65536 00:13:10.499 }, 00:13:10.499 { 00:13:10.499 "name": "BaseBdev2", 00:13:10.499 "uuid": 
"14b0bdfe-d0b6-500f-a484-4a96ba07e79d", 00:13:10.499 "is_configured": true, 00:13:10.499 "data_offset": 0, 00:13:10.499 "data_size": 65536 00:13:10.499 }, 00:13:10.499 { 00:13:10.499 "name": "BaseBdev3", 00:13:10.499 "uuid": "875c1667-649c-5cea-b7cc-d38e81d4049d", 00:13:10.499 "is_configured": true, 00:13:10.499 "data_offset": 0, 00:13:10.499 "data_size": 65536 00:13:10.499 }, 00:13:10.499 { 00:13:10.499 "name": "BaseBdev4", 00:13:10.499 "uuid": "919f6d24-8c31-5562-a508-0e5e3242e4fc", 00:13:10.499 "is_configured": true, 00:13:10.499 "data_offset": 0, 00:13:10.499 "data_size": 65536 00:13:10.499 } 00:13:10.499 ] 00:13:10.499 }' 00:13:10.499 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.759 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.759 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.759 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.759 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:10.759 16:39:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.759 16:39:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.759 [2024-12-07 16:39:09.488542] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.759 [2024-12-07 16:39:09.607118] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:10.759 [2024-12-07 16:39:09.618263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.759 [2024-12-07 16:39:09.618421] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.759 [2024-12-07 16:39:09.618458] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: 
*ERROR*: Failed to remove target bdev: No such device 00:13:10.759 [2024-12-07 16:39:09.646284] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.019 16:39:09 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.020 "name": "raid_bdev1", 00:13:11.020 "uuid": "19da6dbc-c607-41d7-a824-e01b79f84390", 00:13:11.020 "strip_size_kb": 0, 00:13:11.020 "state": "online", 00:13:11.020 "raid_level": "raid1", 00:13:11.020 "superblock": false, 00:13:11.020 "num_base_bdevs": 4, 00:13:11.020 "num_base_bdevs_discovered": 3, 00:13:11.020 "num_base_bdevs_operational": 3, 00:13:11.020 "base_bdevs_list": [ 00:13:11.020 { 00:13:11.020 "name": null, 00:13:11.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.020 "is_configured": false, 00:13:11.020 "data_offset": 0, 00:13:11.020 "data_size": 65536 00:13:11.020 }, 00:13:11.020 { 00:13:11.020 "name": "BaseBdev2", 00:13:11.020 "uuid": "14b0bdfe-d0b6-500f-a484-4a96ba07e79d", 00:13:11.020 "is_configured": true, 00:13:11.020 "data_offset": 0, 00:13:11.020 "data_size": 65536 00:13:11.020 }, 00:13:11.020 { 00:13:11.020 "name": "BaseBdev3", 00:13:11.020 "uuid": "875c1667-649c-5cea-b7cc-d38e81d4049d", 00:13:11.020 "is_configured": true, 00:13:11.020 "data_offset": 0, 00:13:11.020 "data_size": 65536 00:13:11.020 }, 00:13:11.020 { 00:13:11.020 "name": "BaseBdev4", 00:13:11.020 "uuid": "919f6d24-8c31-5562-a508-0e5e3242e4fc", 00:13:11.020 "is_configured": true, 00:13:11.020 "data_offset": 0, 00:13:11.020 "data_size": 65536 00:13:11.020 } 00:13:11.020 ] 00:13:11.020 }' 00:13:11.020 16:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.020 16:39:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.279 132.50 IOPS, 397.50 MiB/s [2024-12-07T16:39:10.178Z] 16:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.279 16:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.279 16:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.279 16:39:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.280 16:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.280 16:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.280 16:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.280 16:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.280 16:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.280 16:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.280 16:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.280 "name": "raid_bdev1", 00:13:11.280 "uuid": "19da6dbc-c607-41d7-a824-e01b79f84390", 00:13:11.280 "strip_size_kb": 0, 00:13:11.280 "state": "online", 00:13:11.280 "raid_level": "raid1", 00:13:11.280 "superblock": false, 00:13:11.280 "num_base_bdevs": 4, 00:13:11.280 "num_base_bdevs_discovered": 3, 00:13:11.280 "num_base_bdevs_operational": 3, 00:13:11.280 "base_bdevs_list": [ 00:13:11.280 { 00:13:11.280 "name": null, 00:13:11.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.280 "is_configured": false, 00:13:11.280 "data_offset": 0, 00:13:11.280 "data_size": 65536 00:13:11.280 }, 00:13:11.280 { 00:13:11.280 "name": "BaseBdev2", 00:13:11.280 "uuid": "14b0bdfe-d0b6-500f-a484-4a96ba07e79d", 00:13:11.280 "is_configured": true, 00:13:11.280 "data_offset": 0, 00:13:11.280 "data_size": 65536 00:13:11.280 }, 00:13:11.280 { 00:13:11.280 "name": "BaseBdev3", 00:13:11.280 "uuid": "875c1667-649c-5cea-b7cc-d38e81d4049d", 00:13:11.280 "is_configured": true, 00:13:11.280 "data_offset": 0, 00:13:11.280 "data_size": 65536 00:13:11.280 }, 00:13:11.280 { 00:13:11.280 "name": "BaseBdev4", 00:13:11.280 "uuid": 
"919f6d24-8c31-5562-a508-0e5e3242e4fc", 00:13:11.280 "is_configured": true, 00:13:11.280 "data_offset": 0, 00:13:11.280 "data_size": 65536 00:13:11.280 } 00:13:11.280 ] 00:13:11.280 }' 00:13:11.280 16:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.540 16:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.540 16:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.540 16:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.540 16:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:11.540 16:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.540 16:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.540 [2024-12-07 16:39:10.241398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.540 16:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.540 16:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:11.540 [2024-12-07 16:39:10.300169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:11.540 [2024-12-07 16:39:10.302495] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:11.540 [2024-12-07 16:39:10.426437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:11.540 [2024-12-07 16:39:10.428593] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:11.799 [2024-12-07 16:39:10.648669] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 
0 offset_end: 6144 00:13:11.799 [2024-12-07 16:39:10.649146] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:12.059 [2024-12-07 16:39:10.895755] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:12.059 [2024-12-07 16:39:10.897904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:12.317 132.00 IOPS, 396.00 MiB/s [2024-12-07T16:39:11.216Z] [2024-12-07 16:39:11.151587] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.577 "name": "raid_bdev1", 00:13:12.577 "uuid": 
"19da6dbc-c607-41d7-a824-e01b79f84390", 00:13:12.577 "strip_size_kb": 0, 00:13:12.577 "state": "online", 00:13:12.577 "raid_level": "raid1", 00:13:12.577 "superblock": false, 00:13:12.577 "num_base_bdevs": 4, 00:13:12.577 "num_base_bdevs_discovered": 4, 00:13:12.577 "num_base_bdevs_operational": 4, 00:13:12.577 "process": { 00:13:12.577 "type": "rebuild", 00:13:12.577 "target": "spare", 00:13:12.577 "progress": { 00:13:12.577 "blocks": 10240, 00:13:12.577 "percent": 15 00:13:12.577 } 00:13:12.577 }, 00:13:12.577 "base_bdevs_list": [ 00:13:12.577 { 00:13:12.577 "name": "spare", 00:13:12.577 "uuid": "61237ab0-bb0f-52a5-b1dd-64b8fa52e619", 00:13:12.577 "is_configured": true, 00:13:12.577 "data_offset": 0, 00:13:12.577 "data_size": 65536 00:13:12.577 }, 00:13:12.577 { 00:13:12.577 "name": "BaseBdev2", 00:13:12.577 "uuid": "14b0bdfe-d0b6-500f-a484-4a96ba07e79d", 00:13:12.577 "is_configured": true, 00:13:12.577 "data_offset": 0, 00:13:12.577 "data_size": 65536 00:13:12.577 }, 00:13:12.577 { 00:13:12.577 "name": "BaseBdev3", 00:13:12.577 "uuid": "875c1667-649c-5cea-b7cc-d38e81d4049d", 00:13:12.577 "is_configured": true, 00:13:12.577 "data_offset": 0, 00:13:12.577 "data_size": 65536 00:13:12.577 }, 00:13:12.577 { 00:13:12.577 "name": "BaseBdev4", 00:13:12.577 "uuid": "919f6d24-8c31-5562-a508-0e5e3242e4fc", 00:13:12.577 "is_configured": true, 00:13:12.577 "data_offset": 0, 00:13:12.577 "data_size": 65536 00:13:12.577 } 00:13:12.577 ] 00:13:12.577 }' 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 
-- # '[' false = true ']' 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.577 16:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.577 [2024-12-07 16:39:11.425900] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:12.837 [2024-12-07 16:39:11.477609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:12.837 [2024-12-07 16:39:11.498883] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:13:12.837 [2024-12-07 16:39:11.498943] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:12.837 16:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:12.837 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:12.837 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.837 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.837 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.837 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:13:12.837 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.837 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.837 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.837 16:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 16:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 16:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.837 "name": "raid_bdev1", 00:13:12.837 "uuid": "19da6dbc-c607-41d7-a824-e01b79f84390", 00:13:12.837 "strip_size_kb": 0, 00:13:12.837 "state": "online", 00:13:12.837 "raid_level": "raid1", 00:13:12.837 "superblock": false, 00:13:12.837 "num_base_bdevs": 4, 00:13:12.837 "num_base_bdevs_discovered": 3, 00:13:12.837 "num_base_bdevs_operational": 3, 00:13:12.837 "process": { 00:13:12.837 "type": "rebuild", 00:13:12.837 "target": "spare", 00:13:12.837 "progress": { 00:13:12.837 "blocks": 14336, 00:13:12.837 "percent": 21 00:13:12.837 } 00:13:12.837 }, 00:13:12.837 "base_bdevs_list": [ 00:13:12.837 { 00:13:12.837 "name": "spare", 00:13:12.837 "uuid": "61237ab0-bb0f-52a5-b1dd-64b8fa52e619", 00:13:12.837 "is_configured": true, 00:13:12.837 "data_offset": 0, 00:13:12.837 "data_size": 65536 00:13:12.837 }, 00:13:12.837 { 00:13:12.837 "name": null, 00:13:12.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.837 "is_configured": false, 00:13:12.837 "data_offset": 0, 00:13:12.837 "data_size": 65536 00:13:12.837 }, 00:13:12.837 { 00:13:12.837 "name": "BaseBdev3", 00:13:12.837 "uuid": "875c1667-649c-5cea-b7cc-d38e81d4049d", 00:13:12.837 "is_configured": true, 00:13:12.837 "data_offset": 0, 00:13:12.837 
"data_size": 65536 00:13:12.837 }, 00:13:12.837 { 00:13:12.837 "name": "BaseBdev4", 00:13:12.837 "uuid": "919f6d24-8c31-5562-a508-0e5e3242e4fc", 00:13:12.837 "is_configured": true, 00:13:12.837 "data_offset": 0, 00:13:12.837 "data_size": 65536 00:13:12.837 } 00:13:12.837 ] 00:13:12.837 }' 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.838 [2024-12-07 16:39:11.620935] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=403 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.838 "name": "raid_bdev1", 00:13:12.838 "uuid": "19da6dbc-c607-41d7-a824-e01b79f84390", 00:13:12.838 "strip_size_kb": 0, 00:13:12.838 "state": "online", 00:13:12.838 "raid_level": "raid1", 00:13:12.838 "superblock": false, 00:13:12.838 "num_base_bdevs": 4, 00:13:12.838 "num_base_bdevs_discovered": 3, 00:13:12.838 "num_base_bdevs_operational": 3, 00:13:12.838 "process": { 00:13:12.838 "type": "rebuild", 00:13:12.838 "target": "spare", 00:13:12.838 "progress": { 00:13:12.838 "blocks": 16384, 00:13:12.838 "percent": 25 00:13:12.838 } 00:13:12.838 }, 00:13:12.838 "base_bdevs_list": [ 00:13:12.838 { 00:13:12.838 "name": "spare", 00:13:12.838 "uuid": "61237ab0-bb0f-52a5-b1dd-64b8fa52e619", 00:13:12.838 "is_configured": true, 00:13:12.838 "data_offset": 0, 00:13:12.838 "data_size": 65536 00:13:12.838 }, 00:13:12.838 { 00:13:12.838 "name": null, 00:13:12.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.838 "is_configured": false, 00:13:12.838 "data_offset": 0, 00:13:12.838 "data_size": 65536 00:13:12.838 }, 00:13:12.838 { 00:13:12.838 "name": "BaseBdev3", 00:13:12.838 "uuid": "875c1667-649c-5cea-b7cc-d38e81d4049d", 00:13:12.838 "is_configured": true, 00:13:12.838 "data_offset": 0, 00:13:12.838 "data_size": 65536 00:13:12.838 }, 00:13:12.838 { 00:13:12.838 "name": "BaseBdev4", 00:13:12.838 "uuid": "919f6d24-8c31-5562-a508-0e5e3242e4fc", 00:13:12.838 "is_configured": true, 00:13:12.838 "data_offset": 0, 00:13:12.838 "data_size": 65536 00:13:12.838 } 00:13:12.838 ] 00:13:12.838 }' 00:13:12.838 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.097 
16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.097 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.097 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.097 16:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:13.097 [2024-12-07 16:39:11.865748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:14.075 118.50 IOPS, 355.50 MiB/s [2024-12-07T16:39:12.974Z] 16:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.075 "name": "raid_bdev1", 
00:13:14.075 "uuid": "19da6dbc-c607-41d7-a824-e01b79f84390", 00:13:14.075 "strip_size_kb": 0, 00:13:14.075 "state": "online", 00:13:14.075 "raid_level": "raid1", 00:13:14.075 "superblock": false, 00:13:14.075 "num_base_bdevs": 4, 00:13:14.075 "num_base_bdevs_discovered": 3, 00:13:14.075 "num_base_bdevs_operational": 3, 00:13:14.075 "process": { 00:13:14.075 "type": "rebuild", 00:13:14.075 "target": "spare", 00:13:14.075 "progress": { 00:13:14.075 "blocks": 36864, 00:13:14.075 "percent": 56 00:13:14.075 } 00:13:14.075 }, 00:13:14.075 "base_bdevs_list": [ 00:13:14.075 { 00:13:14.075 "name": "spare", 00:13:14.075 "uuid": "61237ab0-bb0f-52a5-b1dd-64b8fa52e619", 00:13:14.075 "is_configured": true, 00:13:14.075 "data_offset": 0, 00:13:14.075 "data_size": 65536 00:13:14.075 }, 00:13:14.075 { 00:13:14.075 "name": null, 00:13:14.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.075 "is_configured": false, 00:13:14.075 "data_offset": 0, 00:13:14.075 "data_size": 65536 00:13:14.075 }, 00:13:14.075 { 00:13:14.075 "name": "BaseBdev3", 00:13:14.075 "uuid": "875c1667-649c-5cea-b7cc-d38e81d4049d", 00:13:14.075 "is_configured": true, 00:13:14.075 "data_offset": 0, 00:13:14.075 "data_size": 65536 00:13:14.075 }, 00:13:14.075 { 00:13:14.075 "name": "BaseBdev4", 00:13:14.075 "uuid": "919f6d24-8c31-5562-a508-0e5e3242e4fc", 00:13:14.075 "is_configured": true, 00:13:14.075 "data_offset": 0, 00:13:14.075 "data_size": 65536 00:13:14.075 } 00:13:14.075 ] 00:13:14.075 }' 00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.075 104.40 IOPS, 313.20 MiB/s [2024-12-07T16:39:12.974Z] 16:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:13:14.075 16:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:14.335 [2024-12-07 16:39:13.215004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:14.905 [2024-12-07 16:39:13.565468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:15.166 92.17 IOPS, 276.50 MiB/s [2024-12-07T16:39:14.065Z] 16:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.167 16:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.167 16:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.167 16:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.167 16:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.167 16:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.167 16:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.167 16:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.167 16:39:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.167 16:39:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.167 16:39:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.167 16:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.167 "name": "raid_bdev1", 00:13:15.167 "uuid": "19da6dbc-c607-41d7-a824-e01b79f84390", 00:13:15.167 "strip_size_kb": 0, 00:13:15.167 "state": "online", 00:13:15.167 "raid_level": "raid1", 
00:13:15.167 "superblock": false, 00:13:15.167 "num_base_bdevs": 4, 00:13:15.167 "num_base_bdevs_discovered": 3, 00:13:15.167 "num_base_bdevs_operational": 3, 00:13:15.167 "process": { 00:13:15.167 "type": "rebuild", 00:13:15.167 "target": "spare", 00:13:15.167 "progress": { 00:13:15.167 "blocks": 57344, 00:13:15.167 "percent": 87 00:13:15.167 } 00:13:15.167 }, 00:13:15.167 "base_bdevs_list": [ 00:13:15.167 { 00:13:15.167 "name": "spare", 00:13:15.167 "uuid": "61237ab0-bb0f-52a5-b1dd-64b8fa52e619", 00:13:15.167 "is_configured": true, 00:13:15.167 "data_offset": 0, 00:13:15.167 "data_size": 65536 00:13:15.167 }, 00:13:15.167 { 00:13:15.167 "name": null, 00:13:15.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.167 "is_configured": false, 00:13:15.167 "data_offset": 0, 00:13:15.167 "data_size": 65536 00:13:15.167 }, 00:13:15.167 { 00:13:15.167 "name": "BaseBdev3", 00:13:15.167 "uuid": "875c1667-649c-5cea-b7cc-d38e81d4049d", 00:13:15.167 "is_configured": true, 00:13:15.167 "data_offset": 0, 00:13:15.167 "data_size": 65536 00:13:15.167 }, 00:13:15.167 { 00:13:15.167 "name": "BaseBdev4", 00:13:15.167 "uuid": "919f6d24-8c31-5562-a508-0e5e3242e4fc", 00:13:15.167 "is_configured": true, 00:13:15.167 "data_offset": 0, 00:13:15.167 "data_size": 65536 00:13:15.167 } 00:13:15.167 ] 00:13:15.167 }' 00:13:15.167 16:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.167 16:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.167 16:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.429 16:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.429 16:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:15.699 [2024-12-07 16:39:14.333063] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 
00:13:15.699 [2024-12-07 16:39:14.437898] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:15.699 [2024-12-07 16:39:14.443442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.270 83.43 IOPS, 250.29 MiB/s [2024-12-07T16:39:15.169Z] 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:16.270 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.270 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.270 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.270 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.270 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.270 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.270 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.270 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.270 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.270 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.529 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.529 "name": "raid_bdev1", 00:13:16.529 "uuid": "19da6dbc-c607-41d7-a824-e01b79f84390", 00:13:16.529 "strip_size_kb": 0, 00:13:16.529 "state": "online", 00:13:16.529 "raid_level": "raid1", 00:13:16.529 "superblock": false, 00:13:16.529 "num_base_bdevs": 4, 00:13:16.529 "num_base_bdevs_discovered": 3, 00:13:16.529 "num_base_bdevs_operational": 3, 00:13:16.529 
"base_bdevs_list": [ 00:13:16.529 { 00:13:16.529 "name": "spare", 00:13:16.529 "uuid": "61237ab0-bb0f-52a5-b1dd-64b8fa52e619", 00:13:16.529 "is_configured": true, 00:13:16.529 "data_offset": 0, 00:13:16.529 "data_size": 65536 00:13:16.529 }, 00:13:16.529 { 00:13:16.529 "name": null, 00:13:16.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.529 "is_configured": false, 00:13:16.529 "data_offset": 0, 00:13:16.529 "data_size": 65536 00:13:16.529 }, 00:13:16.529 { 00:13:16.529 "name": "BaseBdev3", 00:13:16.529 "uuid": "875c1667-649c-5cea-b7cc-d38e81d4049d", 00:13:16.529 "is_configured": true, 00:13:16.529 "data_offset": 0, 00:13:16.529 "data_size": 65536 00:13:16.529 }, 00:13:16.529 { 00:13:16.529 "name": "BaseBdev4", 00:13:16.529 "uuid": "919f6d24-8c31-5562-a508-0e5e3242e4fc", 00:13:16.529 "is_configured": true, 00:13:16.529 "data_offset": 0, 00:13:16.529 "data_size": 65536 00:13:16.529 } 00:13:16.529 ] 00:13:16.529 }' 00:13:16.529 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.529 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:16.529 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.529 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:16.529 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:16.529 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.529 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.529 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.530 "name": "raid_bdev1", 00:13:16.530 "uuid": "19da6dbc-c607-41d7-a824-e01b79f84390", 00:13:16.530 "strip_size_kb": 0, 00:13:16.530 "state": "online", 00:13:16.530 "raid_level": "raid1", 00:13:16.530 "superblock": false, 00:13:16.530 "num_base_bdevs": 4, 00:13:16.530 "num_base_bdevs_discovered": 3, 00:13:16.530 "num_base_bdevs_operational": 3, 00:13:16.530 "base_bdevs_list": [ 00:13:16.530 { 00:13:16.530 "name": "spare", 00:13:16.530 "uuid": "61237ab0-bb0f-52a5-b1dd-64b8fa52e619", 00:13:16.530 "is_configured": true, 00:13:16.530 "data_offset": 0, 00:13:16.530 "data_size": 65536 00:13:16.530 }, 00:13:16.530 { 00:13:16.530 "name": null, 00:13:16.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.530 "is_configured": false, 00:13:16.530 "data_offset": 0, 00:13:16.530 "data_size": 65536 00:13:16.530 }, 00:13:16.530 { 00:13:16.530 "name": "BaseBdev3", 00:13:16.530 "uuid": "875c1667-649c-5cea-b7cc-d38e81d4049d", 00:13:16.530 "is_configured": true, 00:13:16.530 "data_offset": 0, 00:13:16.530 "data_size": 65536 00:13:16.530 }, 00:13:16.530 { 00:13:16.530 "name": "BaseBdev4", 00:13:16.530 "uuid": "919f6d24-8c31-5562-a508-0e5e3242e4fc", 00:13:16.530 "is_configured": true, 00:13:16.530 "data_offset": 0, 00:13:16.530 "data_size": 65536 00:13:16.530 } 00:13:16.530 ] 
00:13:16.530 }' 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:13:16.530 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.789 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.789 "name": "raid_bdev1", 00:13:16.789 "uuid": "19da6dbc-c607-41d7-a824-e01b79f84390", 00:13:16.789 "strip_size_kb": 0, 00:13:16.789 "state": "online", 00:13:16.789 "raid_level": "raid1", 00:13:16.789 "superblock": false, 00:13:16.789 "num_base_bdevs": 4, 00:13:16.789 "num_base_bdevs_discovered": 3, 00:13:16.789 "num_base_bdevs_operational": 3, 00:13:16.789 "base_bdevs_list": [ 00:13:16.789 { 00:13:16.789 "name": "spare", 00:13:16.789 "uuid": "61237ab0-bb0f-52a5-b1dd-64b8fa52e619", 00:13:16.789 "is_configured": true, 00:13:16.789 "data_offset": 0, 00:13:16.789 "data_size": 65536 00:13:16.789 }, 00:13:16.789 { 00:13:16.789 "name": null, 00:13:16.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.789 "is_configured": false, 00:13:16.789 "data_offset": 0, 00:13:16.789 "data_size": 65536 00:13:16.789 }, 00:13:16.789 { 00:13:16.789 "name": "BaseBdev3", 00:13:16.789 "uuid": "875c1667-649c-5cea-b7cc-d38e81d4049d", 00:13:16.789 "is_configured": true, 00:13:16.789 "data_offset": 0, 00:13:16.789 "data_size": 65536 00:13:16.789 }, 00:13:16.789 { 00:13:16.789 "name": "BaseBdev4", 00:13:16.789 "uuid": "919f6d24-8c31-5562-a508-0e5e3242e4fc", 00:13:16.789 "is_configured": true, 00:13:16.789 "data_offset": 0, 00:13:16.789 "data_size": 65536 00:13:16.789 } 00:13:16.789 ] 00:13:16.790 }' 00:13:16.790 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.790 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.049 [2024-12-07 16:39:15.843860] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:17.049 [2024-12-07 16:39:15.843938] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.049 00:13:17.049 Latency(us) 00:13:17.049 [2024-12-07T16:39:15.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.049 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:17.049 raid_bdev1 : 7.93 77.45 232.36 0.00 0.00 18829.42 329.11 119052.30 00:13:17.049 [2024-12-07T16:39:15.948Z] =================================================================================================================== 00:13:17.049 [2024-12-07T16:39:15.948Z] Total : 77.45 232.36 0.00 0.00 18829.42 329.11 119052.30 00:13:17.049 [2024-12-07 16:39:15.859358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.049 [2024-12-07 16:39:15.859431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.049 [2024-12-07 16:39:15.859546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.049 [2024-12-07 16:39:15.859629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:17.049 { 00:13:17.049 "results": [ 00:13:17.049 { 00:13:17.049 "job": "raid_bdev1", 00:13:17.049 "core_mask": "0x1", 00:13:17.049 "workload": "randrw", 00:13:17.049 "percentage": 50, 00:13:17.049 "status": "finished", 00:13:17.049 "queue_depth": 2, 00:13:17.049 "io_size": 3145728, 00:13:17.049 "runtime": 7.927415, 00:13:17.049 "iops": 77.45273837688578, 00:13:17.049 "mibps": 232.35821513065733, 00:13:17.049 "io_failed": 0, 00:13:17.049 "io_timeout": 0, 00:13:17.049 "avg_latency_us": 18829.41885268054, 00:13:17.049 "min_latency_us": 329.1109170305677, 00:13:17.049 
"max_latency_us": 119052.29694323144 00:13:17.049 } 00:13:17.049 ], 00:13:17.049 "core_count": 1 00:13:17.049 } 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:17.049 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.050 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:17.050 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.050 16:39:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.050 16:39:15 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:17.310 /dev/nbd0 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.310 1+0 records in 00:13:17.310 1+0 records out 00:13:17.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383845 s, 10.7 MB/s 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.310 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 
00:13:17.570 /dev/nbd1 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.571 1+0 records in 00:13:17.571 1+0 records out 00:13:17.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427047 s, 9.6 MB/s 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 
00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.571 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 
-- # for bdev in "${base_bdevs[@]:1}" 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.831 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:18.091 /dev/nbd1 00:13:18.091 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:18.091 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:18.091 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:18.091 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:18.091 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.091 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.091 16:39:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:18.092 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:18.092 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.092 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.092 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.092 1+0 records in 00:13:18.092 1+0 records out 00:13:18.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038998 s, 10.5 MB/s 00:13:18.092 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.092 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:18.092 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.092 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.092 16:39:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:18.092 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.092 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:18.092 16:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:18.352 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:18.352 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.352 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:18.352 16:39:17 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.352 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:18.352 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.352 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:18.352 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89665 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89665 ']' 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89665 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:18.612 16:39:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89665 00:13:19.008 16:39:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:19.008 16:39:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:19.008 16:39:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89665' 00:13:19.008 killing process with pid 89665 
00:13:19.008 16:39:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89665 00:13:19.008 Received shutdown signal, test time was about 9.594107 seconds 00:13:19.008 00:13:19.008 Latency(us) 00:13:19.008 [2024-12-07T16:39:17.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.008 [2024-12-07T16:39:17.907Z] =================================================================================================================== 00:13:19.008 [2024-12-07T16:39:17.907Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:19.008 [2024-12-07 16:39:17.519252] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:19.008 16:39:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89665 00:13:19.008 [2024-12-07 16:39:17.604142] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.312 16:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:19.312 00:13:19.312 real 0m11.806s 00:13:19.312 user 0m15.096s 00:13:19.312 sys 0m2.000s 00:13:19.312 16:39:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.312 16:39:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.312 ************************************ 00:13:19.312 END TEST raid_rebuild_test_io 00:13:19.312 ************************************ 00:13:19.312 16:39:18 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:19.312 16:39:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:19.312 16:39:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.312 16:39:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:19.312 ************************************ 00:13:19.312 START TEST raid_rebuild_test_sb_io 00:13:19.312 ************************************ 00:13:19.312 16:39:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=90063 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 90063 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 90063 ']' 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.312 16:39:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.312 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.312 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:19.312 Zero copy mechanism will not be used. 00:13:19.312 [2024-12-07 16:39:18.159467] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:19.312 [2024-12-07 16:39:18.159618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90063 ] 00:13:19.572 [2024-12-07 16:39:18.323926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.572 [2024-12-07 16:39:18.394952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.832 [2024-12-07 16:39:18.471845] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.832 [2024-12-07 16:39:18.471890] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.091 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.091 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:20.091 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.091 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:20.091 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.091 16:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.351 BaseBdev1_malloc 00:13:20.351 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.351 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:20.351 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.351 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.351 [2024-12-07 16:39:19.010604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:20.351 [2024-12-07 16:39:19.010718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.351 [2024-12-07 16:39:19.010765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:20.351 [2024-12-07 16:39:19.010835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.351 [2024-12-07 16:39:19.013311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.351 [2024-12-07 16:39:19.013405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:20.351 BaseBdev1 00:13:20.351 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.351 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.351 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:20.351 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:20.351 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.351 BaseBdev2_malloc 00:13:20.351 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.351 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:20.351 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.351 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.351 [2024-12-07 16:39:19.054768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:20.351 [2024-12-07 16:39:19.054853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.351 [2024-12-07 16:39:19.054878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:20.351 [2024-12-07 16:39:19.054887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.351 [2024-12-07 16:39:19.057298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.351 [2024-12-07 16:39:19.057377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:20.351 BaseBdev2 00:13:20.351 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.352 BaseBdev3_malloc 00:13:20.352 
16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.352 [2024-12-07 16:39:19.089306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:20.352 [2024-12-07 16:39:19.089393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.352 [2024-12-07 16:39:19.089437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:20.352 [2024-12-07 16:39:19.089462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.352 [2024-12-07 16:39:19.091819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.352 [2024-12-07 16:39:19.091884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:20.352 BaseBdev3 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.352 BaseBdev4_malloc 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.352 [2024-12-07 16:39:19.123720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:20.352 [2024-12-07 16:39:19.123810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.352 [2024-12-07 16:39:19.123854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:20.352 [2024-12-07 16:39:19.123883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.352 [2024-12-07 16:39:19.126211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.352 [2024-12-07 16:39:19.126270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:20.352 BaseBdev4 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.352 spare_malloc 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:20.352 spare_delay 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.352 [2024-12-07 16:39:19.170143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:20.352 [2024-12-07 16:39:19.170226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.352 [2024-12-07 16:39:19.170263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:20.352 [2024-12-07 16:39:19.170290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.352 [2024-12-07 16:39:19.172729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.352 [2024-12-07 16:39:19.172792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:20.352 spare 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.352 [2024-12-07 16:39:19.182224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.352 [2024-12-07 16:39:19.184287] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:13:20.352 [2024-12-07 16:39:19.184402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.352 [2024-12-07 16:39:19.184452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:20.352 [2024-12-07 16:39:19.184618] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:20.352 [2024-12-07 16:39:19.184629] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:20.352 [2024-12-07 16:39:19.184870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:20.352 [2024-12-07 16:39:19.185009] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:20.352 [2024-12-07 16:39:19.185023] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:20.352 [2024-12-07 16:39:19.185154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.352 16:39:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.352 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.352 "name": "raid_bdev1", 00:13:20.352 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:20.352 "strip_size_kb": 0, 00:13:20.352 "state": "online", 00:13:20.352 "raid_level": "raid1", 00:13:20.352 "superblock": true, 00:13:20.352 "num_base_bdevs": 4, 00:13:20.352 "num_base_bdevs_discovered": 4, 00:13:20.352 "num_base_bdevs_operational": 4, 00:13:20.352 "base_bdevs_list": [ 00:13:20.352 { 00:13:20.352 "name": "BaseBdev1", 00:13:20.352 "uuid": "b964b274-52a0-564c-8c7e-8c194e8c9bc6", 00:13:20.352 "is_configured": true, 00:13:20.352 "data_offset": 2048, 00:13:20.352 "data_size": 63488 00:13:20.352 }, 00:13:20.352 { 00:13:20.352 "name": "BaseBdev2", 00:13:20.352 "uuid": "07d8440c-83bb-527c-8770-f31dd6735d1b", 00:13:20.352 "is_configured": true, 00:13:20.352 "data_offset": 2048, 00:13:20.352 "data_size": 63488 00:13:20.352 }, 00:13:20.352 { 00:13:20.352 "name": "BaseBdev3", 00:13:20.352 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:20.352 "is_configured": true, 00:13:20.352 "data_offset": 2048, 
00:13:20.352 "data_size": 63488 00:13:20.352 }, 00:13:20.352 { 00:13:20.352 "name": "BaseBdev4", 00:13:20.353 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:20.353 "is_configured": true, 00:13:20.353 "data_offset": 2048, 00:13:20.353 "data_size": 63488 00:13:20.353 } 00:13:20.353 ] 00:13:20.353 }' 00:13:20.353 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.353 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.921 [2024-12-07 16:39:19.629747] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:20.921 16:39:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.921 [2024-12-07 16:39:19.693284] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.921 "name": "raid_bdev1", 00:13:20.921 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:20.921 "strip_size_kb": 0, 00:13:20.921 "state": "online", 00:13:20.921 "raid_level": "raid1", 00:13:20.921 "superblock": true, 00:13:20.921 "num_base_bdevs": 4, 00:13:20.921 "num_base_bdevs_discovered": 3, 00:13:20.921 "num_base_bdevs_operational": 3, 00:13:20.921 "base_bdevs_list": [ 00:13:20.921 { 00:13:20.921 "name": null, 00:13:20.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.921 "is_configured": false, 00:13:20.921 "data_offset": 0, 00:13:20.921 "data_size": 63488 00:13:20.921 }, 00:13:20.921 { 00:13:20.921 "name": "BaseBdev2", 00:13:20.921 "uuid": "07d8440c-83bb-527c-8770-f31dd6735d1b", 00:13:20.921 "is_configured": true, 00:13:20.921 "data_offset": 2048, 00:13:20.921 "data_size": 63488 00:13:20.921 }, 00:13:20.921 { 00:13:20.921 "name": "BaseBdev3", 00:13:20.921 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:20.921 "is_configured": true, 00:13:20.921 "data_offset": 2048, 00:13:20.921 "data_size": 63488 00:13:20.921 }, 00:13:20.921 { 00:13:20.921 "name": "BaseBdev4", 00:13:20.921 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:20.921 "is_configured": true, 00:13:20.921 "data_offset": 2048, 00:13:20.921 "data_size": 63488 00:13:20.921 } 00:13:20.921 ] 00:13:20.921 }' 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.921 16:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.921 [2024-12-07 16:39:19.780597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:20.921 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:20.921 Zero copy mechanism will not be used. 00:13:20.921 Running I/O for 60 seconds... 00:13:21.490 16:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:21.490 16:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.490 16:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.490 [2024-12-07 16:39:20.185393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:21.490 16:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.490 16:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:21.490 [2024-12-07 16:39:20.237608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:21.490 [2024-12-07 16:39:20.239990] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:21.490 [2024-12-07 16:39:20.367370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:21.490 [2024-12-07 16:39:20.369478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:21.749 [2024-12-07 16:39:20.604416] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:21.749 [2024-12-07 16:39:20.604811] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:13:22.269 134.00 IOPS, 402.00 MiB/s [2024-12-07T16:39:21.168Z] [2024-12-07 16:39:20.933263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:22.269 [2024-12-07 16:39:20.935397] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:22.528 [2024-12-07 16:39:21.201552] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:22.528 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.528 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.528 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.528 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.528 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.528 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.528 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.528 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.528 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.528 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.528 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.528 "name": "raid_bdev1", 00:13:22.528 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:22.528 "strip_size_kb": 0, 00:13:22.528 "state": "online", 00:13:22.528 "raid_level": "raid1", 
00:13:22.528 "superblock": true, 00:13:22.528 "num_base_bdevs": 4, 00:13:22.528 "num_base_bdevs_discovered": 4, 00:13:22.528 "num_base_bdevs_operational": 4, 00:13:22.528 "process": { 00:13:22.528 "type": "rebuild", 00:13:22.528 "target": "spare", 00:13:22.529 "progress": { 00:13:22.529 "blocks": 10240, 00:13:22.529 "percent": 16 00:13:22.529 } 00:13:22.529 }, 00:13:22.529 "base_bdevs_list": [ 00:13:22.529 { 00:13:22.529 "name": "spare", 00:13:22.529 "uuid": "2c2468cb-5b50-5d6f-96d3-93c7a6b39ee3", 00:13:22.529 "is_configured": true, 00:13:22.529 "data_offset": 2048, 00:13:22.529 "data_size": 63488 00:13:22.529 }, 00:13:22.529 { 00:13:22.529 "name": "BaseBdev2", 00:13:22.529 "uuid": "07d8440c-83bb-527c-8770-f31dd6735d1b", 00:13:22.529 "is_configured": true, 00:13:22.529 "data_offset": 2048, 00:13:22.529 "data_size": 63488 00:13:22.529 }, 00:13:22.529 { 00:13:22.529 "name": "BaseBdev3", 00:13:22.529 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:22.529 "is_configured": true, 00:13:22.529 "data_offset": 2048, 00:13:22.529 "data_size": 63488 00:13:22.529 }, 00:13:22.529 { 00:13:22.529 "name": "BaseBdev4", 00:13:22.529 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:22.529 "is_configured": true, 00:13:22.529 "data_offset": 2048, 00:13:22.529 "data_size": 63488 00:13:22.529 } 00:13:22.529 ] 00:13:22.529 }' 00:13:22.529 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.529 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.529 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.529 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.529 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:22.529 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.529 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.529 [2024-12-07 16:39:21.385148] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.788 [2024-12-07 16:39:21.501439] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:22.788 [2024-12-07 16:39:21.515087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.788 [2024-12-07 16:39:21.515138] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.788 [2024-12-07 16:39:21.515157] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:22.788 [2024-12-07 16:39:21.536563] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.788 "name": "raid_bdev1", 00:13:22.788 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:22.788 "strip_size_kb": 0, 00:13:22.788 "state": "online", 00:13:22.788 "raid_level": "raid1", 00:13:22.788 "superblock": true, 00:13:22.788 "num_base_bdevs": 4, 00:13:22.788 "num_base_bdevs_discovered": 3, 00:13:22.788 "num_base_bdevs_operational": 3, 00:13:22.788 "base_bdevs_list": [ 00:13:22.788 { 00:13:22.788 "name": null, 00:13:22.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.788 "is_configured": false, 00:13:22.788 "data_offset": 0, 00:13:22.788 "data_size": 63488 00:13:22.788 }, 00:13:22.788 { 00:13:22.788 "name": "BaseBdev2", 00:13:22.788 "uuid": "07d8440c-83bb-527c-8770-f31dd6735d1b", 00:13:22.788 "is_configured": true, 00:13:22.788 "data_offset": 2048, 00:13:22.788 "data_size": 63488 00:13:22.788 }, 00:13:22.788 { 00:13:22.788 "name": "BaseBdev3", 00:13:22.788 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:22.788 "is_configured": true, 00:13:22.788 "data_offset": 2048, 00:13:22.788 "data_size": 63488 00:13:22.788 }, 00:13:22.788 { 00:13:22.788 "name": "BaseBdev4", 00:13:22.788 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 
00:13:22.788 "is_configured": true, 00:13:22.788 "data_offset": 2048, 00:13:22.788 "data_size": 63488 00:13:22.788 } 00:13:22.788 ] 00:13:22.788 }' 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.788 16:39:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.306 131.00 IOPS, 393.00 MiB/s [2024-12-07T16:39:22.205Z] 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.306 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.306 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.306 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.306 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.306 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.306 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.306 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.306 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.306 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.306 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.306 "name": "raid_bdev1", 00:13:23.306 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:23.306 "strip_size_kb": 0, 00:13:23.306 "state": "online", 00:13:23.306 "raid_level": "raid1", 00:13:23.306 "superblock": true, 00:13:23.307 "num_base_bdevs": 4, 00:13:23.307 "num_base_bdevs_discovered": 3, 00:13:23.307 "num_base_bdevs_operational": 3, 
00:13:23.307 "base_bdevs_list": [ 00:13:23.307 { 00:13:23.307 "name": null, 00:13:23.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.307 "is_configured": false, 00:13:23.307 "data_offset": 0, 00:13:23.307 "data_size": 63488 00:13:23.307 }, 00:13:23.307 { 00:13:23.307 "name": "BaseBdev2", 00:13:23.307 "uuid": "07d8440c-83bb-527c-8770-f31dd6735d1b", 00:13:23.307 "is_configured": true, 00:13:23.307 "data_offset": 2048, 00:13:23.307 "data_size": 63488 00:13:23.307 }, 00:13:23.307 { 00:13:23.307 "name": "BaseBdev3", 00:13:23.307 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:23.307 "is_configured": true, 00:13:23.307 "data_offset": 2048, 00:13:23.307 "data_size": 63488 00:13:23.307 }, 00:13:23.307 { 00:13:23.307 "name": "BaseBdev4", 00:13:23.307 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:23.307 "is_configured": true, 00:13:23.307 "data_offset": 2048, 00:13:23.307 "data_size": 63488 00:13:23.307 } 00:13:23.307 ] 00:13:23.307 }' 00:13:23.307 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.307 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.307 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.307 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.307 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:23.307 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.307 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.307 [2024-12-07 16:39:22.139758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:23.307 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:23.307 16:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:23.307 [2024-12-07 16:39:22.182697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:23.307 [2024-12-07 16:39:22.185019] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:23.567 [2024-12-07 16:39:22.295233] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:23.567 [2024-12-07 16:39:22.295776] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:23.567 [2024-12-07 16:39:22.415882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:23.567 [2024-12-07 16:39:22.416291] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.136 [2024-12-07 16:39:22.756933] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:24.136 150.67 IOPS, 452.00 MiB/s [2024-12-07T16:39:23.035Z] [2024-12-07 16:39:22.923497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:24.396 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.396 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.396 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.396 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.396 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.396 16:39:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.396 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.396 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.396 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.396 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.396 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.396 "name": "raid_bdev1", 00:13:24.396 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:24.396 "strip_size_kb": 0, 00:13:24.396 "state": "online", 00:13:24.396 "raid_level": "raid1", 00:13:24.396 "superblock": true, 00:13:24.396 "num_base_bdevs": 4, 00:13:24.396 "num_base_bdevs_discovered": 4, 00:13:24.396 "num_base_bdevs_operational": 4, 00:13:24.396 "process": { 00:13:24.396 "type": "rebuild", 00:13:24.396 "target": "spare", 00:13:24.396 "progress": { 00:13:24.396 "blocks": 12288, 00:13:24.396 "percent": 19 00:13:24.396 } 00:13:24.396 }, 00:13:24.396 "base_bdevs_list": [ 00:13:24.396 { 00:13:24.396 "name": "spare", 00:13:24.396 "uuid": "2c2468cb-5b50-5d6f-96d3-93c7a6b39ee3", 00:13:24.396 "is_configured": true, 00:13:24.396 "data_offset": 2048, 00:13:24.396 "data_size": 63488 00:13:24.396 }, 00:13:24.396 { 00:13:24.396 "name": "BaseBdev2", 00:13:24.396 "uuid": "07d8440c-83bb-527c-8770-f31dd6735d1b", 00:13:24.396 "is_configured": true, 00:13:24.396 "data_offset": 2048, 00:13:24.396 "data_size": 63488 00:13:24.396 }, 00:13:24.396 { 00:13:24.396 "name": "BaseBdev3", 00:13:24.396 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:24.396 "is_configured": true, 00:13:24.396 "data_offset": 2048, 00:13:24.396 "data_size": 63488 00:13:24.396 }, 00:13:24.396 { 00:13:24.396 "name": "BaseBdev4", 00:13:24.396 "uuid": 
"2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:24.396 "is_configured": true, 00:13:24.396 "data_offset": 2048, 00:13:24.396 "data_size": 63488 00:13:24.396 } 00:13:24.396 ] 00:13:24.396 }' 00:13:24.397 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.397 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.397 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.656 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.656 [2024-12-07 16:39:23.309706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 1 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:24.656 2288 offset_end: 18432 00:13:24.656 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:24.656 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:24.656 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:24.656 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:24.656 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:24.656 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:24.656 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.656 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.656 [2024-12-07 16:39:23.315861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:24.656 [2024-12-07 16:39:23.547553] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:24.916 [2024-12-07 16:39:23.752828] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:13:24.916 [2024-12-07 16:39:23.752907] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:24.916 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.916 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:24.916 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:24.916 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.916 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.916 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.916 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.916 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.916 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.916 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.916 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.916 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.916 127.75 IOPS, 383.25 MiB/s [2024-12-07T16:39:23.815Z] 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:25.176 "name": "raid_bdev1", 00:13:25.176 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:25.176 "strip_size_kb": 0, 00:13:25.176 "state": "online", 00:13:25.176 "raid_level": "raid1", 00:13:25.176 "superblock": true, 00:13:25.176 "num_base_bdevs": 4, 00:13:25.176 "num_base_bdevs_discovered": 3, 00:13:25.176 "num_base_bdevs_operational": 3, 00:13:25.176 "process": { 00:13:25.176 "type": "rebuild", 00:13:25.176 "target": "spare", 00:13:25.176 "progress": { 00:13:25.176 "blocks": 16384, 00:13:25.176 "percent": 25 00:13:25.176 } 00:13:25.176 }, 00:13:25.176 "base_bdevs_list": [ 00:13:25.176 { 00:13:25.176 "name": "spare", 00:13:25.176 "uuid": "2c2468cb-5b50-5d6f-96d3-93c7a6b39ee3", 00:13:25.176 "is_configured": true, 00:13:25.176 "data_offset": 2048, 00:13:25.176 "data_size": 63488 00:13:25.176 }, 00:13:25.176 { 00:13:25.176 "name": null, 00:13:25.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.176 "is_configured": false, 00:13:25.176 "data_offset": 0, 00:13:25.176 "data_size": 63488 00:13:25.176 }, 00:13:25.176 { 00:13:25.176 "name": "BaseBdev3", 00:13:25.176 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:25.176 "is_configured": true, 00:13:25.176 "data_offset": 2048, 00:13:25.176 "data_size": 63488 00:13:25.176 }, 00:13:25.176 { 00:13:25.176 "name": "BaseBdev4", 00:13:25.176 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:25.176 "is_configured": true, 00:13:25.176 "data_offset": 2048, 00:13:25.176 "data_size": 63488 00:13:25.176 } 00:13:25.176 ] 00:13:25.176 }' 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=415 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.176 "name": "raid_bdev1", 00:13:25.176 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:25.176 "strip_size_kb": 0, 00:13:25.176 "state": "online", 00:13:25.176 "raid_level": "raid1", 00:13:25.176 "superblock": true, 00:13:25.176 "num_base_bdevs": 4, 00:13:25.176 "num_base_bdevs_discovered": 3, 00:13:25.176 "num_base_bdevs_operational": 3, 00:13:25.176 "process": { 00:13:25.176 "type": "rebuild", 00:13:25.176 "target": "spare", 00:13:25.176 "progress": { 00:13:25.176 "blocks": 18432, 00:13:25.176 
"percent": 29 00:13:25.176 } 00:13:25.176 }, 00:13:25.176 "base_bdevs_list": [ 00:13:25.176 { 00:13:25.176 "name": "spare", 00:13:25.176 "uuid": "2c2468cb-5b50-5d6f-96d3-93c7a6b39ee3", 00:13:25.176 "is_configured": true, 00:13:25.176 "data_offset": 2048, 00:13:25.176 "data_size": 63488 00:13:25.176 }, 00:13:25.176 { 00:13:25.176 "name": null, 00:13:25.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.176 "is_configured": false, 00:13:25.176 "data_offset": 0, 00:13:25.176 "data_size": 63488 00:13:25.176 }, 00:13:25.176 { 00:13:25.176 "name": "BaseBdev3", 00:13:25.176 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:25.176 "is_configured": true, 00:13:25.176 "data_offset": 2048, 00:13:25.176 "data_size": 63488 00:13:25.176 }, 00:13:25.176 { 00:13:25.176 "name": "BaseBdev4", 00:13:25.176 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:25.176 "is_configured": true, 00:13:25.176 "data_offset": 2048, 00:13:25.176 "data_size": 63488 00:13:25.176 } 00:13:25.176 ] 00:13:25.176 }' 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.176 16:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.176 [2024-12-07 16:39:24.016767] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:25.176 16:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.176 16:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.744 [2024-12-07 16:39:24.455023] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:26.003 [2024-12-07 16:39:24.677882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:26.262 111.60 IOPS, 334.80 MiB/s [2024-12-07T16:39:25.161Z] [2024-12-07 16:39:25.025567] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:26.263 [2024-12-07 16:39:25.025904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.263 "name": "raid_bdev1", 00:13:26.263 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:26.263 "strip_size_kb": 0, 00:13:26.263 "state": "online", 00:13:26.263 "raid_level": 
"raid1", 00:13:26.263 "superblock": true, 00:13:26.263 "num_base_bdevs": 4, 00:13:26.263 "num_base_bdevs_discovered": 3, 00:13:26.263 "num_base_bdevs_operational": 3, 00:13:26.263 "process": { 00:13:26.263 "type": "rebuild", 00:13:26.263 "target": "spare", 00:13:26.263 "progress": { 00:13:26.263 "blocks": 34816, 00:13:26.263 "percent": 54 00:13:26.263 } 00:13:26.263 }, 00:13:26.263 "base_bdevs_list": [ 00:13:26.263 { 00:13:26.263 "name": "spare", 00:13:26.263 "uuid": "2c2468cb-5b50-5d6f-96d3-93c7a6b39ee3", 00:13:26.263 "is_configured": true, 00:13:26.263 "data_offset": 2048, 00:13:26.263 "data_size": 63488 00:13:26.263 }, 00:13:26.263 { 00:13:26.263 "name": null, 00:13:26.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.263 "is_configured": false, 00:13:26.263 "data_offset": 0, 00:13:26.263 "data_size": 63488 00:13:26.263 }, 00:13:26.263 { 00:13:26.263 "name": "BaseBdev3", 00:13:26.263 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:26.263 "is_configured": true, 00:13:26.263 "data_offset": 2048, 00:13:26.263 "data_size": 63488 00:13:26.263 }, 00:13:26.263 { 00:13:26.263 "name": "BaseBdev4", 00:13:26.263 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:26.263 "is_configured": true, 00:13:26.263 "data_offset": 2048, 00:13:26.263 "data_size": 63488 00:13:26.263 } 00:13:26.263 ] 00:13:26.263 }' 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.263 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.521 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.521 16:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.347 101.17 IOPS, 303.50 MiB/s [2024-12-07T16:39:26.246Z] [2024-12-07 16:39:26.046440] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:27.347 [2024-12-07 16:39:26.159019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:27.347 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.347 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.347 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.347 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.347 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.347 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.347 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.347 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.347 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.347 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.347 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.606 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.606 "name": "raid_bdev1", 00:13:27.606 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:27.606 "strip_size_kb": 0, 00:13:27.607 "state": "online", 00:13:27.607 "raid_level": "raid1", 00:13:27.607 "superblock": true, 00:13:27.607 "num_base_bdevs": 4, 00:13:27.607 "num_base_bdevs_discovered": 3, 00:13:27.607 "num_base_bdevs_operational": 3, 
00:13:27.607 "process": { 00:13:27.607 "type": "rebuild", 00:13:27.607 "target": "spare", 00:13:27.607 "progress": { 00:13:27.607 "blocks": 53248, 00:13:27.607 "percent": 83 00:13:27.607 } 00:13:27.607 }, 00:13:27.607 "base_bdevs_list": [ 00:13:27.607 { 00:13:27.607 "name": "spare", 00:13:27.607 "uuid": "2c2468cb-5b50-5d6f-96d3-93c7a6b39ee3", 00:13:27.607 "is_configured": true, 00:13:27.607 "data_offset": 2048, 00:13:27.607 "data_size": 63488 00:13:27.607 }, 00:13:27.607 { 00:13:27.607 "name": null, 00:13:27.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.607 "is_configured": false, 00:13:27.607 "data_offset": 0, 00:13:27.607 "data_size": 63488 00:13:27.607 }, 00:13:27.607 { 00:13:27.607 "name": "BaseBdev3", 00:13:27.607 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:27.607 "is_configured": true, 00:13:27.607 "data_offset": 2048, 00:13:27.607 "data_size": 63488 00:13:27.607 }, 00:13:27.607 { 00:13:27.607 "name": "BaseBdev4", 00:13:27.607 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:27.607 "is_configured": true, 00:13:27.607 "data_offset": 2048, 00:13:27.607 "data_size": 63488 00:13:27.607 } 00:13:27.607 ] 00:13:27.607 }' 00:13:27.607 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.607 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.607 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.607 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.607 16:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.607 [2024-12-07 16:39:26.486086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:27.607 [2024-12-07 16:39:26.487484] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:28.175 92.14 IOPS, 276.43 MiB/s [2024-12-07T16:39:27.074Z] [2024-12-07 16:39:26.942415] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:28.175 [2024-12-07 16:39:27.042231] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:28.175 [2024-12-07 16:39:27.045691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.744 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.744 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.744 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.744 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.744 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.744 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.744 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.744 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.745 "name": "raid_bdev1", 00:13:28.745 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:28.745 "strip_size_kb": 0, 00:13:28.745 
"state": "online", 00:13:28.745 "raid_level": "raid1", 00:13:28.745 "superblock": true, 00:13:28.745 "num_base_bdevs": 4, 00:13:28.745 "num_base_bdevs_discovered": 3, 00:13:28.745 "num_base_bdevs_operational": 3, 00:13:28.745 "base_bdevs_list": [ 00:13:28.745 { 00:13:28.745 "name": "spare", 00:13:28.745 "uuid": "2c2468cb-5b50-5d6f-96d3-93c7a6b39ee3", 00:13:28.745 "is_configured": true, 00:13:28.745 "data_offset": 2048, 00:13:28.745 "data_size": 63488 00:13:28.745 }, 00:13:28.745 { 00:13:28.745 "name": null, 00:13:28.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.745 "is_configured": false, 00:13:28.745 "data_offset": 0, 00:13:28.745 "data_size": 63488 00:13:28.745 }, 00:13:28.745 { 00:13:28.745 "name": "BaseBdev3", 00:13:28.745 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:28.745 "is_configured": true, 00:13:28.745 "data_offset": 2048, 00:13:28.745 "data_size": 63488 00:13:28.745 }, 00:13:28.745 { 00:13:28.745 "name": "BaseBdev4", 00:13:28.745 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:28.745 "is_configured": true, 00:13:28.745 "data_offset": 2048, 00:13:28.745 "data_size": 63488 00:13:28.745 } 00:13:28.745 ] 00:13:28.745 }' 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.745 
16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.745 "name": "raid_bdev1", 00:13:28.745 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:28.745 "strip_size_kb": 0, 00:13:28.745 "state": "online", 00:13:28.745 "raid_level": "raid1", 00:13:28.745 "superblock": true, 00:13:28.745 "num_base_bdevs": 4, 00:13:28.745 "num_base_bdevs_discovered": 3, 00:13:28.745 "num_base_bdevs_operational": 3, 00:13:28.745 "base_bdevs_list": [ 00:13:28.745 { 00:13:28.745 "name": "spare", 00:13:28.745 "uuid": "2c2468cb-5b50-5d6f-96d3-93c7a6b39ee3", 00:13:28.745 "is_configured": true, 00:13:28.745 "data_offset": 2048, 00:13:28.745 "data_size": 63488 00:13:28.745 }, 00:13:28.745 { 00:13:28.745 "name": null, 00:13:28.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.745 "is_configured": false, 00:13:28.745 "data_offset": 0, 00:13:28.745 "data_size": 63488 00:13:28.745 }, 00:13:28.745 { 00:13:28.745 "name": "BaseBdev3", 00:13:28.745 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:28.745 "is_configured": true, 00:13:28.745 "data_offset": 2048, 
00:13:28.745 "data_size": 63488 00:13:28.745 }, 00:13:28.745 { 00:13:28.745 "name": "BaseBdev4", 00:13:28.745 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:28.745 "is_configured": true, 00:13:28.745 "data_offset": 2048, 00:13:28.745 "data_size": 63488 00:13:28.745 } 00:13:28.745 ] 00:13:28.745 }' 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.745 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.004 "name": "raid_bdev1", 00:13:29.004 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:29.004 "strip_size_kb": 0, 00:13:29.004 "state": "online", 00:13:29.004 "raid_level": "raid1", 00:13:29.004 "superblock": true, 00:13:29.004 "num_base_bdevs": 4, 00:13:29.004 "num_base_bdevs_discovered": 3, 00:13:29.004 "num_base_bdevs_operational": 3, 00:13:29.004 "base_bdevs_list": [ 00:13:29.004 { 00:13:29.004 "name": "spare", 00:13:29.004 "uuid": "2c2468cb-5b50-5d6f-96d3-93c7a6b39ee3", 00:13:29.004 "is_configured": true, 00:13:29.004 "data_offset": 2048, 00:13:29.004 "data_size": 63488 00:13:29.004 }, 00:13:29.004 { 00:13:29.004 "name": null, 00:13:29.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.004 "is_configured": false, 00:13:29.004 "data_offset": 0, 00:13:29.004 "data_size": 63488 00:13:29.004 }, 00:13:29.004 { 00:13:29.004 "name": "BaseBdev3", 00:13:29.004 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:29.004 "is_configured": true, 00:13:29.004 "data_offset": 2048, 00:13:29.004 "data_size": 63488 00:13:29.004 }, 00:13:29.004 { 00:13:29.004 "name": "BaseBdev4", 00:13:29.004 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:29.004 "is_configured": true, 00:13:29.004 "data_offset": 2048, 00:13:29.004 "data_size": 63488 00:13:29.004 } 00:13:29.004 ] 00:13:29.004 }' 00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:13:29.004 16:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.263 85.50 IOPS, 256.50 MiB/s [2024-12-07T16:39:28.162Z] 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:29.263 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.263 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.263 [2024-12-07 16:39:28.134456] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:29.263 [2024-12-07 16:39:28.134528] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.522 00:13:29.522 Latency(us) 00:13:29.522 [2024-12-07T16:39:28.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.522 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:29.522 raid_bdev1 : 8.46 82.51 247.53 0.00 0.00 17948.67 302.28 119968.08 00:13:29.522 [2024-12-07T16:39:28.421Z] =================================================================================================================== 00:13:29.522 [2024-12-07T16:39:28.421Z] Total : 82.51 247.53 0.00 0.00 17948.67 302.28 119968.08 00:13:29.522 [2024-12-07 16:39:28.229794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.522 [2024-12-07 16:39:28.229873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.522 [2024-12-07 16:39:28.230037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.522 [2024-12-07 16:39:28.230088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:29.522 { 00:13:29.522 "results": [ 00:13:29.522 { 00:13:29.522 "job": "raid_bdev1", 00:13:29.522 "core_mask": "0x1", 00:13:29.522 
"workload": "randrw", 00:13:29.522 "percentage": 50, 00:13:29.522 "status": "finished", 00:13:29.522 "queue_depth": 2, 00:13:29.522 "io_size": 3145728, 00:13:29.522 "runtime": 8.459475, 00:13:29.522 "iops": 82.51103053085446, 00:13:29.522 "mibps": 247.53309159256338, 00:13:29.522 "io_failed": 0, 00:13:29.522 "io_timeout": 0, 00:13:29.522 "avg_latency_us": 17948.668309956083, 00:13:29.522 "min_latency_us": 302.2812227074236, 00:13:29.522 "max_latency_us": 119968.08384279476 00:13:29.522 } 00:13:29.522 ], 00:13:29.522 "core_count": 1 00:13:29.522 } 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:29.522 16:39:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.522 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:29.782 /dev/nbd0 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.782 1+0 records in 00:13:29.782 1+0 
records out 00:13:29.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376257 s, 10.9 MB/s 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- 
# nbd_list=('/dev/nbd1') 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.782 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:30.043 /dev/nbd1 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.043 1+0 records in 00:13:30.043 1+0 records out 00:13:30.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000257579 s, 15.9 MB/s 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.043 16:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.303 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
BaseBdev4 /dev/nbd1 00:13:30.563 /dev/nbd1 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.563 1+0 records in 00:13:30.563 1+0 records out 00:13:30.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449255 s, 9.1 MB/s 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:30.563 16:39:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.563 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:30.822 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:30.822 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:30.822 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:30.822 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.822 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.822 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:30.822 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:30.822 16:39:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.822 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:30.822 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.822 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:30.822 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.822 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:30.822 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.822 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:31.081 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:31.081 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:31.081 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:31.081 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.081 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.081 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:31.081 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:31.081 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.081 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:31.081 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:31.081 16:39:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.081 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.081 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.081 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.082 [2024-12-07 16:39:29.816515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:31.082 [2024-12-07 16:39:29.816618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.082 [2024-12-07 16:39:29.816672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:31.082 [2024-12-07 16:39:29.816704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.082 [2024-12-07 16:39:29.819261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.082 [2024-12-07 16:39:29.819364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:31.082 [2024-12-07 16:39:29.819497] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:31.082 [2024-12-07 16:39:29.819576] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.082 [2024-12-07 16:39:29.819757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:31.082 [2024-12-07 16:39:29.819911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:31.082 spare 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.082 [2024-12-07 16:39:29.919849] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:31.082 [2024-12-07 16:39:29.919908] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:31.082 [2024-12-07 16:39:29.920221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0 00:13:31.082 [2024-12-07 16:39:29.920426] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:31.082 [2024-12-07 16:39:29.920471] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:31.082 [2024-12-07 16:39:29.920697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.082 "name": "raid_bdev1", 00:13:31.082 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:31.082 "strip_size_kb": 0, 00:13:31.082 "state": "online", 00:13:31.082 "raid_level": "raid1", 00:13:31.082 "superblock": true, 00:13:31.082 "num_base_bdevs": 4, 00:13:31.082 "num_base_bdevs_discovered": 3, 00:13:31.082 "num_base_bdevs_operational": 3, 00:13:31.082 "base_bdevs_list": [ 00:13:31.082 { 00:13:31.082 "name": "spare", 00:13:31.082 "uuid": "2c2468cb-5b50-5d6f-96d3-93c7a6b39ee3", 00:13:31.082 "is_configured": true, 00:13:31.082 "data_offset": 2048, 00:13:31.082 "data_size": 63488 00:13:31.082 }, 00:13:31.082 { 00:13:31.082 "name": null, 00:13:31.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.082 "is_configured": false, 00:13:31.082 "data_offset": 2048, 00:13:31.082 "data_size": 63488 00:13:31.082 }, 00:13:31.082 { 00:13:31.082 "name": "BaseBdev3", 00:13:31.082 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:31.082 "is_configured": true, 
00:13:31.082 "data_offset": 2048, 00:13:31.082 "data_size": 63488 00:13:31.082 }, 00:13:31.082 { 00:13:31.082 "name": "BaseBdev4", 00:13:31.082 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:31.082 "is_configured": true, 00:13:31.082 "data_offset": 2048, 00:13:31.082 "data_size": 63488 00:13:31.082 } 00:13:31.082 ] 00:13:31.082 }' 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.082 16:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.649 "name": "raid_bdev1", 00:13:31.649 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:31.649 "strip_size_kb": 0, 00:13:31.649 "state": "online", 00:13:31.649 "raid_level": "raid1", 00:13:31.649 
"superblock": true, 00:13:31.649 "num_base_bdevs": 4, 00:13:31.649 "num_base_bdevs_discovered": 3, 00:13:31.649 "num_base_bdevs_operational": 3, 00:13:31.649 "base_bdevs_list": [ 00:13:31.649 { 00:13:31.649 "name": "spare", 00:13:31.649 "uuid": "2c2468cb-5b50-5d6f-96d3-93c7a6b39ee3", 00:13:31.649 "is_configured": true, 00:13:31.649 "data_offset": 2048, 00:13:31.649 "data_size": 63488 00:13:31.649 }, 00:13:31.649 { 00:13:31.649 "name": null, 00:13:31.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.649 "is_configured": false, 00:13:31.649 "data_offset": 2048, 00:13:31.649 "data_size": 63488 00:13:31.649 }, 00:13:31.649 { 00:13:31.649 "name": "BaseBdev3", 00:13:31.649 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:31.649 "is_configured": true, 00:13:31.649 "data_offset": 2048, 00:13:31.649 "data_size": 63488 00:13:31.649 }, 00:13:31.649 { 00:13:31.649 "name": "BaseBdev4", 00:13:31.649 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:31.649 "is_configured": true, 00:13:31.649 "data_offset": 2048, 00:13:31.649 "data_size": 63488 00:13:31.649 } 00:13:31.649 ] 00:13:31.649 }' 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.649 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.908 [2024-12-07 16:39:30.583613] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.908 "name": "raid_bdev1", 00:13:31.908 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:31.908 "strip_size_kb": 0, 00:13:31.908 "state": "online", 00:13:31.908 "raid_level": "raid1", 00:13:31.908 "superblock": true, 00:13:31.908 "num_base_bdevs": 4, 00:13:31.908 "num_base_bdevs_discovered": 2, 00:13:31.908 "num_base_bdevs_operational": 2, 00:13:31.908 "base_bdevs_list": [ 00:13:31.908 { 00:13:31.908 "name": null, 00:13:31.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.908 "is_configured": false, 00:13:31.908 "data_offset": 0, 00:13:31.908 "data_size": 63488 00:13:31.908 }, 00:13:31.908 { 00:13:31.908 "name": null, 00:13:31.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.908 "is_configured": false, 00:13:31.908 "data_offset": 2048, 00:13:31.908 "data_size": 63488 00:13:31.908 }, 00:13:31.908 { 00:13:31.908 "name": "BaseBdev3", 00:13:31.908 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:31.908 "is_configured": true, 00:13:31.908 "data_offset": 2048, 00:13:31.908 "data_size": 63488 00:13:31.908 }, 00:13:31.908 { 00:13:31.908 "name": "BaseBdev4", 00:13:31.908 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:31.908 "is_configured": true, 00:13:31.908 "data_offset": 2048, 00:13:31.908 "data_size": 63488 00:13:31.908 } 00:13:31.908 ] 00:13:31.908 }' 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.908 16:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.476 16:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:32.476 16:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.476 16:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.476 [2024-12-07 16:39:31.086897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.476 [2024-12-07 16:39:31.087163] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:32.476 [2024-12-07 16:39:31.087220] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:32.476 [2024-12-07 16:39:31.087291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.476 [2024-12-07 16:39:31.093804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090 00:13:32.476 16:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.476 16:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:32.476 [2024-12-07 16:39:31.096037] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.414 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.414 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.414 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.414 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.414 
16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.414 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.414 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.414 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.414 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.414 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.414 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.414 "name": "raid_bdev1", 00:13:33.414 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:33.414 "strip_size_kb": 0, 00:13:33.415 "state": "online", 00:13:33.415 "raid_level": "raid1", 00:13:33.415 "superblock": true, 00:13:33.415 "num_base_bdevs": 4, 00:13:33.415 "num_base_bdevs_discovered": 3, 00:13:33.415 "num_base_bdevs_operational": 3, 00:13:33.415 "process": { 00:13:33.415 "type": "rebuild", 00:13:33.415 "target": "spare", 00:13:33.415 "progress": { 00:13:33.415 "blocks": 20480, 00:13:33.415 "percent": 32 00:13:33.415 } 00:13:33.415 }, 00:13:33.415 "base_bdevs_list": [ 00:13:33.415 { 00:13:33.415 "name": "spare", 00:13:33.415 "uuid": "2c2468cb-5b50-5d6f-96d3-93c7a6b39ee3", 00:13:33.415 "is_configured": true, 00:13:33.415 "data_offset": 2048, 00:13:33.415 "data_size": 63488 00:13:33.415 }, 00:13:33.415 { 00:13:33.415 "name": null, 00:13:33.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.415 "is_configured": false, 00:13:33.415 "data_offset": 2048, 00:13:33.415 "data_size": 63488 00:13:33.415 }, 00:13:33.415 { 00:13:33.415 "name": "BaseBdev3", 00:13:33.415 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:33.415 "is_configured": true, 00:13:33.415 "data_offset": 2048, 00:13:33.415 
"data_size": 63488 00:13:33.415 }, 00:13:33.415 { 00:13:33.415 "name": "BaseBdev4", 00:13:33.415 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:33.415 "is_configured": true, 00:13:33.415 "data_offset": 2048, 00:13:33.415 "data_size": 63488 00:13:33.415 } 00:13:33.415 ] 00:13:33.415 }' 00:13:33.415 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.415 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.415 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.415 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.415 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:33.415 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.415 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.415 [2024-12-07 16:39:32.244819] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.415 [2024-12-07 16:39:32.303600] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:33.415 [2024-12-07 16:39:32.303705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.415 [2024-12-07 16:39:32.303746] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.415 [2024-12-07 16:39:32.303767] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:33.674 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:33.675 16:39:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.675 "name": "raid_bdev1", 00:13:33.675 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:33.675 "strip_size_kb": 0, 00:13:33.675 "state": "online", 00:13:33.675 "raid_level": "raid1", 00:13:33.675 "superblock": true, 00:13:33.675 "num_base_bdevs": 4, 00:13:33.675 "num_base_bdevs_discovered": 2, 00:13:33.675 "num_base_bdevs_operational": 2, 
00:13:33.675 "base_bdevs_list": [ 00:13:33.675 { 00:13:33.675 "name": null, 00:13:33.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.675 "is_configured": false, 00:13:33.675 "data_offset": 0, 00:13:33.675 "data_size": 63488 00:13:33.675 }, 00:13:33.675 { 00:13:33.675 "name": null, 00:13:33.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.675 "is_configured": false, 00:13:33.675 "data_offset": 2048, 00:13:33.675 "data_size": 63488 00:13:33.675 }, 00:13:33.675 { 00:13:33.675 "name": "BaseBdev3", 00:13:33.675 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:33.675 "is_configured": true, 00:13:33.675 "data_offset": 2048, 00:13:33.675 "data_size": 63488 00:13:33.675 }, 00:13:33.675 { 00:13:33.675 "name": "BaseBdev4", 00:13:33.675 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:33.675 "is_configured": true, 00:13:33.675 "data_offset": 2048, 00:13:33.675 "data_size": 63488 00:13:33.675 } 00:13:33.675 ] 00:13:33.675 }' 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.675 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.934 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:33.934 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.934 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.934 [2024-12-07 16:39:32.750263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:33.934 [2024-12-07 16:39:32.750369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.934 [2024-12-07 16:39:32.750418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:33.934 [2024-12-07 16:39:32.750449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:33.934 [2024-12-07 16:39:32.750978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.934 [2024-12-07 16:39:32.751030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:33.934 [2024-12-07 16:39:32.751163] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:33.934 [2024-12-07 16:39:32.751201] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:33.934 [2024-12-07 16:39:32.751245] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:33.934 [2024-12-07 16:39:32.751383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.934 [2024-12-07 16:39:32.757294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:13:33.934 spare 00:13:33.934 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.934 16:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:33.934 [2024-12-07 16:39:32.759517] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:34.873 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.873 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.873 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.873 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.873 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.133 "name": "raid_bdev1", 00:13:35.133 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:35.133 "strip_size_kb": 0, 00:13:35.133 "state": "online", 00:13:35.133 "raid_level": "raid1", 00:13:35.133 "superblock": true, 00:13:35.133 "num_base_bdevs": 4, 00:13:35.133 "num_base_bdevs_discovered": 3, 00:13:35.133 "num_base_bdevs_operational": 3, 00:13:35.133 "process": { 00:13:35.133 "type": "rebuild", 00:13:35.133 "target": "spare", 00:13:35.133 "progress": { 00:13:35.133 "blocks": 20480, 00:13:35.133 "percent": 32 00:13:35.133 } 00:13:35.133 }, 00:13:35.133 "base_bdevs_list": [ 00:13:35.133 { 00:13:35.133 "name": "spare", 00:13:35.133 "uuid": "2c2468cb-5b50-5d6f-96d3-93c7a6b39ee3", 00:13:35.133 "is_configured": true, 00:13:35.133 "data_offset": 2048, 00:13:35.133 "data_size": 63488 00:13:35.133 }, 00:13:35.133 { 00:13:35.133 "name": null, 00:13:35.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.133 "is_configured": false, 00:13:35.133 "data_offset": 2048, 00:13:35.133 "data_size": 63488 00:13:35.133 }, 00:13:35.133 { 00:13:35.133 "name": "BaseBdev3", 00:13:35.133 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:35.133 "is_configured": true, 00:13:35.133 "data_offset": 2048, 00:13:35.133 "data_size": 63488 00:13:35.133 }, 00:13:35.133 { 00:13:35.133 "name": "BaseBdev4", 00:13:35.133 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:35.133 "is_configured": true, 00:13:35.133 "data_offset": 2048, 
00:13:35.133 "data_size": 63488 00:13:35.133 } 00:13:35.133 ] 00:13:35.133 }' 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.133 [2024-12-07 16:39:33.919555] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.133 [2024-12-07 16:39:33.967157] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:35.133 [2024-12-07 16:39:33.967252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.133 [2024-12-07 16:39:33.967270] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.133 [2024-12-07 16:39:33.967288] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.133 
16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.133 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.134 16:39:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.134 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.392 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.392 "name": "raid_bdev1", 00:13:35.392 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:35.392 "strip_size_kb": 0, 00:13:35.392 "state": "online", 00:13:35.392 "raid_level": "raid1", 00:13:35.392 "superblock": true, 00:13:35.392 "num_base_bdevs": 4, 00:13:35.392 "num_base_bdevs_discovered": 2, 00:13:35.392 "num_base_bdevs_operational": 2, 00:13:35.392 "base_bdevs_list": [ 00:13:35.392 { 00:13:35.392 "name": null, 00:13:35.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.392 "is_configured": false, 00:13:35.393 "data_offset": 0, 00:13:35.393 
"data_size": 63488 00:13:35.393 }, 00:13:35.393 { 00:13:35.393 "name": null, 00:13:35.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.393 "is_configured": false, 00:13:35.393 "data_offset": 2048, 00:13:35.393 "data_size": 63488 00:13:35.393 }, 00:13:35.393 { 00:13:35.393 "name": "BaseBdev3", 00:13:35.393 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:35.393 "is_configured": true, 00:13:35.393 "data_offset": 2048, 00:13:35.393 "data_size": 63488 00:13:35.393 }, 00:13:35.393 { 00:13:35.393 "name": "BaseBdev4", 00:13:35.393 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:35.393 "is_configured": true, 00:13:35.393 "data_offset": 2048, 00:13:35.393 "data_size": 63488 00:13:35.393 } 00:13:35.393 ] 00:13:35.393 }' 00:13:35.393 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.393 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.653 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.653 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.653 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.653 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.653 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.653 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.653 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.653 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.653 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.653 16:39:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.653 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.653 "name": "raid_bdev1", 00:13:35.653 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:35.653 "strip_size_kb": 0, 00:13:35.653 "state": "online", 00:13:35.653 "raid_level": "raid1", 00:13:35.653 "superblock": true, 00:13:35.653 "num_base_bdevs": 4, 00:13:35.653 "num_base_bdevs_discovered": 2, 00:13:35.653 "num_base_bdevs_operational": 2, 00:13:35.653 "base_bdevs_list": [ 00:13:35.653 { 00:13:35.653 "name": null, 00:13:35.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.653 "is_configured": false, 00:13:35.653 "data_offset": 0, 00:13:35.653 "data_size": 63488 00:13:35.653 }, 00:13:35.653 { 00:13:35.653 "name": null, 00:13:35.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.653 "is_configured": false, 00:13:35.653 "data_offset": 2048, 00:13:35.653 "data_size": 63488 00:13:35.653 }, 00:13:35.653 { 00:13:35.653 "name": "BaseBdev3", 00:13:35.653 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:35.653 "is_configured": true, 00:13:35.653 "data_offset": 2048, 00:13:35.653 "data_size": 63488 00:13:35.653 }, 00:13:35.653 { 00:13:35.653 "name": "BaseBdev4", 00:13:35.653 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:35.653 "is_configured": true, 00:13:35.653 "data_offset": 2048, 00:13:35.653 "data_size": 63488 00:13:35.653 } 00:13:35.653 ] 00:13:35.653 }' 00:13:35.653 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.653 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.653 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.925 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.925 16:39:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:35.925 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.925 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.925 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.925 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:35.926 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.926 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.926 [2024-12-07 16:39:34.568940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:35.926 [2024-12-07 16:39:34.569037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.926 [2024-12-07 16:39:34.569076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:13:35.926 [2024-12-07 16:39:34.569107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.926 [2024-12-07 16:39:34.569629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.926 [2024-12-07 16:39:34.569686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:35.926 [2024-12-07 16:39:34.569795] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:35.926 [2024-12-07 16:39:34.569839] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:35.926 [2024-12-07 16:39:34.569893] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:35.926 [2024-12-07 
16:39:34.569910] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:35.926 BaseBdev1 00:13:35.926 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.926 16:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.867 "name": "raid_bdev1", 00:13:36.867 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:36.867 "strip_size_kb": 0, 00:13:36.867 "state": "online", 00:13:36.867 "raid_level": "raid1", 00:13:36.867 "superblock": true, 00:13:36.867 "num_base_bdevs": 4, 00:13:36.867 "num_base_bdevs_discovered": 2, 00:13:36.867 "num_base_bdevs_operational": 2, 00:13:36.867 "base_bdevs_list": [ 00:13:36.867 { 00:13:36.867 "name": null, 00:13:36.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.867 "is_configured": false, 00:13:36.867 "data_offset": 0, 00:13:36.867 "data_size": 63488 00:13:36.867 }, 00:13:36.867 { 00:13:36.867 "name": null, 00:13:36.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.867 "is_configured": false, 00:13:36.867 "data_offset": 2048, 00:13:36.867 "data_size": 63488 00:13:36.867 }, 00:13:36.867 { 00:13:36.867 "name": "BaseBdev3", 00:13:36.867 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:36.867 "is_configured": true, 00:13:36.867 "data_offset": 2048, 00:13:36.867 "data_size": 63488 00:13:36.867 }, 00:13:36.867 { 00:13:36.867 "name": "BaseBdev4", 00:13:36.867 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:36.867 "is_configured": true, 00:13:36.867 "data_offset": 2048, 00:13:36.867 "data_size": 63488 00:13:36.867 } 00:13:36.867 ] 00:13:36.867 }' 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.867 16:39:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.126 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.126 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.126 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:13:37.126 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.126 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.126 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.126 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.126 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.126 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.387 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.387 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.387 "name": "raid_bdev1", 00:13:37.387 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:37.387 "strip_size_kb": 0, 00:13:37.387 "state": "online", 00:13:37.387 "raid_level": "raid1", 00:13:37.387 "superblock": true, 00:13:37.387 "num_base_bdevs": 4, 00:13:37.387 "num_base_bdevs_discovered": 2, 00:13:37.387 "num_base_bdevs_operational": 2, 00:13:37.387 "base_bdevs_list": [ 00:13:37.387 { 00:13:37.387 "name": null, 00:13:37.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.387 "is_configured": false, 00:13:37.387 "data_offset": 0, 00:13:37.387 "data_size": 63488 00:13:37.387 }, 00:13:37.387 { 00:13:37.387 "name": null, 00:13:37.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.387 "is_configured": false, 00:13:37.387 "data_offset": 2048, 00:13:37.387 "data_size": 63488 00:13:37.387 }, 00:13:37.387 { 00:13:37.387 "name": "BaseBdev3", 00:13:37.387 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:37.387 "is_configured": true, 00:13:37.387 "data_offset": 2048, 00:13:37.387 "data_size": 63488 00:13:37.387 }, 00:13:37.387 { 00:13:37.387 
"name": "BaseBdev4", 00:13:37.387 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:37.387 "is_configured": true, 00:13:37.387 "data_offset": 2048, 00:13:37.387 "data_size": 63488 00:13:37.387 } 00:13:37.387 ] 00:13:37.387 }' 00:13:37.387 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.387 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.387 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.388 [2024-12-07 16:39:36.154757] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:37.388 [2024-12-07 16:39:36.155007] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:37.388 [2024-12-07 16:39:36.155065] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:37.388 request: 00:13:37.388 { 00:13:37.388 "base_bdev": "BaseBdev1", 00:13:37.388 "raid_bdev": "raid_bdev1", 00:13:37.388 "method": "bdev_raid_add_base_bdev", 00:13:37.388 "req_id": 1 00:13:37.388 } 00:13:37.388 Got JSON-RPC error response 00:13:37.388 response: 00:13:37.388 { 00:13:37.388 "code": -22, 00:13:37.388 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:37.388 } 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:37.388 16:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.377 "name": "raid_bdev1", 00:13:38.377 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:38.377 "strip_size_kb": 0, 00:13:38.377 "state": "online", 00:13:38.377 "raid_level": "raid1", 00:13:38.377 "superblock": true, 00:13:38.377 "num_base_bdevs": 4, 00:13:38.377 "num_base_bdevs_discovered": 2, 00:13:38.377 "num_base_bdevs_operational": 2, 00:13:38.377 "base_bdevs_list": [ 00:13:38.377 { 00:13:38.377 "name": null, 00:13:38.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.377 "is_configured": false, 00:13:38.377 "data_offset": 0, 00:13:38.377 "data_size": 63488 00:13:38.377 }, 00:13:38.377 { 00:13:38.377 "name": null, 00:13:38.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.377 "is_configured": false, 
00:13:38.377 "data_offset": 2048, 00:13:38.377 "data_size": 63488 00:13:38.377 }, 00:13:38.377 { 00:13:38.377 "name": "BaseBdev3", 00:13:38.377 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:38.377 "is_configured": true, 00:13:38.377 "data_offset": 2048, 00:13:38.377 "data_size": 63488 00:13:38.377 }, 00:13:38.377 { 00:13:38.377 "name": "BaseBdev4", 00:13:38.377 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:38.377 "is_configured": true, 00:13:38.377 "data_offset": 2048, 00:13:38.377 "data_size": 63488 00:13:38.377 } 00:13:38.377 ] 00:13:38.377 }' 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.377 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.946 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.946 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.946 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.946 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.946 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.946 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.946 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.946 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.946 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.946 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.946 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:38.946 "name": "raid_bdev1", 00:13:38.946 "uuid": "f5eb52bd-bf57-4e12-beb1-fcf209b0e355", 00:13:38.946 "strip_size_kb": 0, 00:13:38.946 "state": "online", 00:13:38.946 "raid_level": "raid1", 00:13:38.946 "superblock": true, 00:13:38.946 "num_base_bdevs": 4, 00:13:38.946 "num_base_bdevs_discovered": 2, 00:13:38.946 "num_base_bdevs_operational": 2, 00:13:38.946 "base_bdevs_list": [ 00:13:38.946 { 00:13:38.946 "name": null, 00:13:38.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.946 "is_configured": false, 00:13:38.946 "data_offset": 0, 00:13:38.946 "data_size": 63488 00:13:38.946 }, 00:13:38.946 { 00:13:38.946 "name": null, 00:13:38.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.946 "is_configured": false, 00:13:38.946 "data_offset": 2048, 00:13:38.946 "data_size": 63488 00:13:38.946 }, 00:13:38.946 { 00:13:38.946 "name": "BaseBdev3", 00:13:38.946 "uuid": "01ec4bc0-a49c-57b8-ae00-79c26af0fbe2", 00:13:38.946 "is_configured": true, 00:13:38.947 "data_offset": 2048, 00:13:38.947 "data_size": 63488 00:13:38.947 }, 00:13:38.947 { 00:13:38.947 "name": "BaseBdev4", 00:13:38.947 "uuid": "2d06eab5-680e-5f35-8d2c-ea89d65a5b82", 00:13:38.947 "is_configured": true, 00:13:38.947 "data_offset": 2048, 00:13:38.947 "data_size": 63488 00:13:38.947 } 00:13:38.947 ] 00:13:38.947 }' 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 90063 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 
90063 ']' 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 90063 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90063 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:38.947 killing process with pid 90063 00:13:38.947 Received shutdown signal, test time was about 17.987486 seconds 00:13:38.947 00:13:38.947 Latency(us) 00:13:38.947 [2024-12-07T16:39:37.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.947 [2024-12-07T16:39:37.846Z] =================================================================================================================== 00:13:38.947 [2024-12-07T16:39:37.846Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90063' 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 90063 00:13:38.947 [2024-12-07 16:39:37.735880] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.947 [2024-12-07 16:39:37.736029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.947 16:39:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 90063 00:13:38.947 [2024-12-07 16:39:37.736118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.947 [2024-12-07 16:39:37.736129] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:38.947 [2024-12-07 16:39:37.822294] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.516 16:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:39.516 00:13:39.516 real 0m20.138s 00:13:39.516 user 0m26.526s 00:13:39.516 sys 0m2.743s 00:13:39.516 16:39:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.516 16:39:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.516 ************************************ 00:13:39.516 END TEST raid_rebuild_test_sb_io 00:13:39.516 ************************************ 00:13:39.516 16:39:38 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:39.516 16:39:38 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:39.516 16:39:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:39.516 16:39:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.516 16:39:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.516 ************************************ 00:13:39.516 START TEST raid5f_state_function_test 00:13:39.516 ************************************ 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90775 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90775' 00:13:39.516 Process raid pid: 90775 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90775 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90775 ']' 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:39.516 16:39:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.516 [2024-12-07 16:39:38.376042] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:39.516 [2024-12-07 16:39:38.376262] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.775 [2024-12-07 16:39:38.540611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.775 [2024-12-07 16:39:38.613564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.035 [2024-12-07 16:39:38.690572] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.035 [2024-12-07 16:39:38.690617] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.605 [2024-12-07 16:39:39.197820] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:40.605 [2024-12-07 16:39:39.197910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:40.605 [2024-12-07 16:39:39.197948] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:40.605 [2024-12-07 16:39:39.197995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:40.605 [2024-12-07 16:39:39.198026] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:40.605 [2024-12-07 16:39:39.198053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.605 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:40.606 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.606 "name": "Existed_Raid", 00:13:40.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.606 "strip_size_kb": 64, 00:13:40.606 "state": "configuring", 00:13:40.606 "raid_level": "raid5f", 00:13:40.606 "superblock": false, 00:13:40.606 "num_base_bdevs": 3, 00:13:40.606 "num_base_bdevs_discovered": 0, 00:13:40.606 "num_base_bdevs_operational": 3, 00:13:40.606 "base_bdevs_list": [ 00:13:40.606 { 00:13:40.606 "name": "BaseBdev1", 00:13:40.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.606 "is_configured": false, 00:13:40.606 "data_offset": 0, 00:13:40.606 "data_size": 0 00:13:40.606 }, 00:13:40.606 { 00:13:40.606 "name": "BaseBdev2", 00:13:40.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.606 "is_configured": false, 00:13:40.606 "data_offset": 0, 00:13:40.606 "data_size": 0 00:13:40.606 }, 00:13:40.606 { 00:13:40.606 "name": "BaseBdev3", 00:13:40.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.606 "is_configured": false, 00:13:40.606 "data_offset": 0, 00:13:40.606 "data_size": 0 00:13:40.606 } 00:13:40.606 ] 00:13:40.606 }' 00:13:40.606 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.606 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.866 [2024-12-07 16:39:39.648943] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:40.866 [2024-12-07 16:39:39.649023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.866 [2024-12-07 16:39:39.660962] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:40.866 [2024-12-07 16:39:39.661037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:40.866 [2024-12-07 16:39:39.661062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:40.866 [2024-12-07 16:39:39.661085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:40.866 [2024-12-07 16:39:39.661102] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:40.866 [2024-12-07 16:39:39.661123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.866 [2024-12-07 16:39:39.687952] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.866 BaseBdev1 00:13:40.866 16:39:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.866 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.866 [ 00:13:40.866 { 00:13:40.866 "name": "BaseBdev1", 00:13:40.866 "aliases": [ 00:13:40.866 "deecc1a6-4478-4563-ae4e-354e9b7c72af" 00:13:40.866 ], 00:13:40.867 "product_name": "Malloc disk", 00:13:40.867 "block_size": 512, 00:13:40.867 "num_blocks": 65536, 00:13:40.867 "uuid": "deecc1a6-4478-4563-ae4e-354e9b7c72af", 00:13:40.867 "assigned_rate_limits": { 00:13:40.867 "rw_ios_per_sec": 0, 00:13:40.867 
"rw_mbytes_per_sec": 0, 00:13:40.867 "r_mbytes_per_sec": 0, 00:13:40.867 "w_mbytes_per_sec": 0 00:13:40.867 }, 00:13:40.867 "claimed": true, 00:13:40.867 "claim_type": "exclusive_write", 00:13:40.867 "zoned": false, 00:13:40.867 "supported_io_types": { 00:13:40.867 "read": true, 00:13:40.867 "write": true, 00:13:40.867 "unmap": true, 00:13:40.867 "flush": true, 00:13:40.867 "reset": true, 00:13:40.867 "nvme_admin": false, 00:13:40.867 "nvme_io": false, 00:13:40.867 "nvme_io_md": false, 00:13:40.867 "write_zeroes": true, 00:13:40.867 "zcopy": true, 00:13:40.867 "get_zone_info": false, 00:13:40.867 "zone_management": false, 00:13:40.867 "zone_append": false, 00:13:40.867 "compare": false, 00:13:40.867 "compare_and_write": false, 00:13:40.867 "abort": true, 00:13:40.867 "seek_hole": false, 00:13:40.867 "seek_data": false, 00:13:40.867 "copy": true, 00:13:40.867 "nvme_iov_md": false 00:13:40.867 }, 00:13:40.867 "memory_domains": [ 00:13:40.867 { 00:13:40.867 "dma_device_id": "system", 00:13:40.867 "dma_device_type": 1 00:13:40.867 }, 00:13:40.867 { 00:13:40.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.867 "dma_device_type": 2 00:13:40.867 } 00:13:40.867 ], 00:13:40.867 "driver_specific": {} 00:13:40.867 } 00:13:40.867 ] 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.867 16:39:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.867 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.127 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.127 "name": "Existed_Raid", 00:13:41.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.127 "strip_size_kb": 64, 00:13:41.127 "state": "configuring", 00:13:41.127 "raid_level": "raid5f", 00:13:41.127 "superblock": false, 00:13:41.127 "num_base_bdevs": 3, 00:13:41.127 "num_base_bdevs_discovered": 1, 00:13:41.127 "num_base_bdevs_operational": 3, 00:13:41.127 "base_bdevs_list": [ 00:13:41.127 { 00:13:41.127 "name": "BaseBdev1", 00:13:41.127 "uuid": "deecc1a6-4478-4563-ae4e-354e9b7c72af", 00:13:41.127 "is_configured": true, 00:13:41.127 "data_offset": 0, 00:13:41.127 "data_size": 65536 00:13:41.127 }, 00:13:41.127 { 00:13:41.127 "name": 
"BaseBdev2", 00:13:41.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.127 "is_configured": false, 00:13:41.127 "data_offset": 0, 00:13:41.127 "data_size": 0 00:13:41.127 }, 00:13:41.127 { 00:13:41.127 "name": "BaseBdev3", 00:13:41.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.127 "is_configured": false, 00:13:41.127 "data_offset": 0, 00:13:41.127 "data_size": 0 00:13:41.127 } 00:13:41.127 ] 00:13:41.127 }' 00:13:41.127 16:39:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.127 16:39:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.387 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:41.387 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.387 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.387 [2024-12-07 16:39:40.211404] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:41.388 [2024-12-07 16:39:40.211492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.388 [2024-12-07 16:39:40.223376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.388 [2024-12-07 16:39:40.225579] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:41.388 [2024-12-07 16:39:40.225657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:41.388 [2024-12-07 16:39:40.225670] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:41.388 [2024-12-07 16:39:40.225680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.388 "name": "Existed_Raid", 00:13:41.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.388 "strip_size_kb": 64, 00:13:41.388 "state": "configuring", 00:13:41.388 "raid_level": "raid5f", 00:13:41.388 "superblock": false, 00:13:41.388 "num_base_bdevs": 3, 00:13:41.388 "num_base_bdevs_discovered": 1, 00:13:41.388 "num_base_bdevs_operational": 3, 00:13:41.388 "base_bdevs_list": [ 00:13:41.388 { 00:13:41.388 "name": "BaseBdev1", 00:13:41.388 "uuid": "deecc1a6-4478-4563-ae4e-354e9b7c72af", 00:13:41.388 "is_configured": true, 00:13:41.388 "data_offset": 0, 00:13:41.388 "data_size": 65536 00:13:41.388 }, 00:13:41.388 { 00:13:41.388 "name": "BaseBdev2", 00:13:41.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.388 "is_configured": false, 00:13:41.388 "data_offset": 0, 00:13:41.388 "data_size": 0 00:13:41.388 }, 00:13:41.388 { 00:13:41.388 "name": "BaseBdev3", 00:13:41.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.388 "is_configured": false, 00:13:41.388 "data_offset": 0, 00:13:41.388 "data_size": 0 00:13:41.388 } 00:13:41.388 ] 00:13:41.388 }' 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.388 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.958 [2024-12-07 16:39:40.671677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.958 BaseBdev2 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.958 [ 00:13:41.958 { 00:13:41.958 "name": "BaseBdev2", 00:13:41.958 "aliases": [ 00:13:41.958 "cbf05a6a-42ff-4bbb-ad5b-4c13fbfb63ae" 00:13:41.958 ], 00:13:41.958 "product_name": "Malloc disk", 00:13:41.958 "block_size": 512, 00:13:41.958 "num_blocks": 65536, 00:13:41.958 "uuid": "cbf05a6a-42ff-4bbb-ad5b-4c13fbfb63ae", 00:13:41.958 "assigned_rate_limits": { 00:13:41.958 "rw_ios_per_sec": 0, 00:13:41.958 "rw_mbytes_per_sec": 0, 00:13:41.958 "r_mbytes_per_sec": 0, 00:13:41.958 "w_mbytes_per_sec": 0 00:13:41.958 }, 00:13:41.958 "claimed": true, 00:13:41.958 "claim_type": "exclusive_write", 00:13:41.958 "zoned": false, 00:13:41.958 "supported_io_types": { 00:13:41.958 "read": true, 00:13:41.958 "write": true, 00:13:41.958 "unmap": true, 00:13:41.958 "flush": true, 00:13:41.958 "reset": true, 00:13:41.958 "nvme_admin": false, 00:13:41.958 "nvme_io": false, 00:13:41.958 "nvme_io_md": false, 00:13:41.958 "write_zeroes": true, 00:13:41.958 "zcopy": true, 00:13:41.958 "get_zone_info": false, 00:13:41.958 "zone_management": false, 00:13:41.958 "zone_append": false, 00:13:41.958 "compare": false, 00:13:41.958 "compare_and_write": false, 00:13:41.958 "abort": true, 00:13:41.958 "seek_hole": false, 00:13:41.958 "seek_data": false, 00:13:41.958 "copy": true, 00:13:41.958 "nvme_iov_md": false 00:13:41.958 }, 00:13:41.958 "memory_domains": [ 00:13:41.958 { 00:13:41.958 "dma_device_id": "system", 00:13:41.958 "dma_device_type": 1 00:13:41.958 }, 00:13:41.958 { 00:13:41.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.958 "dma_device_type": 2 00:13:41.958 } 00:13:41.958 ], 00:13:41.958 "driver_specific": {} 00:13:41.958 } 00:13:41.958 ] 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:41.958 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:41.959 "name": "Existed_Raid", 00:13:41.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.959 "strip_size_kb": 64, 00:13:41.959 "state": "configuring", 00:13:41.959 "raid_level": "raid5f", 00:13:41.959 "superblock": false, 00:13:41.959 "num_base_bdevs": 3, 00:13:41.959 "num_base_bdevs_discovered": 2, 00:13:41.959 "num_base_bdevs_operational": 3, 00:13:41.959 "base_bdevs_list": [ 00:13:41.959 { 00:13:41.959 "name": "BaseBdev1", 00:13:41.959 "uuid": "deecc1a6-4478-4563-ae4e-354e9b7c72af", 00:13:41.959 "is_configured": true, 00:13:41.959 "data_offset": 0, 00:13:41.959 "data_size": 65536 00:13:41.959 }, 00:13:41.959 { 00:13:41.959 "name": "BaseBdev2", 00:13:41.959 "uuid": "cbf05a6a-42ff-4bbb-ad5b-4c13fbfb63ae", 00:13:41.959 "is_configured": true, 00:13:41.959 "data_offset": 0, 00:13:41.959 "data_size": 65536 00:13:41.959 }, 00:13:41.959 { 00:13:41.959 "name": "BaseBdev3", 00:13:41.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.959 "is_configured": false, 00:13:41.959 "data_offset": 0, 00:13:41.959 "data_size": 0 00:13:41.959 } 00:13:41.959 ] 00:13:41.959 }' 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.959 16:39:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.529 [2024-12-07 16:39:41.179686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.529 [2024-12-07 16:39:41.179822] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:42.529 [2024-12-07 16:39:41.179878] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:42.529 [2024-12-07 16:39:41.180247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:42.529 [2024-12-07 16:39:41.180779] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:42.529 [2024-12-07 16:39:41.180827] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:42.529 [2024-12-07 16:39:41.181108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.529 BaseBdev3 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.529 [ 00:13:42.529 { 00:13:42.529 "name": "BaseBdev3", 00:13:42.529 "aliases": [ 00:13:42.529 "5fbf6e2e-6cc8-4f73-afd8-9a2e82fbc392" 00:13:42.529 ], 00:13:42.529 "product_name": "Malloc disk", 00:13:42.529 "block_size": 512, 00:13:42.529 "num_blocks": 65536, 00:13:42.529 "uuid": "5fbf6e2e-6cc8-4f73-afd8-9a2e82fbc392", 00:13:42.529 "assigned_rate_limits": { 00:13:42.529 "rw_ios_per_sec": 0, 00:13:42.529 "rw_mbytes_per_sec": 0, 00:13:42.529 "r_mbytes_per_sec": 0, 00:13:42.529 "w_mbytes_per_sec": 0 00:13:42.529 }, 00:13:42.529 "claimed": true, 00:13:42.529 "claim_type": "exclusive_write", 00:13:42.529 "zoned": false, 00:13:42.529 "supported_io_types": { 00:13:42.529 "read": true, 00:13:42.529 "write": true, 00:13:42.529 "unmap": true, 00:13:42.529 "flush": true, 00:13:42.529 "reset": true, 00:13:42.529 "nvme_admin": false, 00:13:42.529 "nvme_io": false, 00:13:42.529 "nvme_io_md": false, 00:13:42.529 "write_zeroes": true, 00:13:42.529 "zcopy": true, 00:13:42.529 "get_zone_info": false, 00:13:42.529 "zone_management": false, 00:13:42.529 "zone_append": false, 00:13:42.529 "compare": false, 00:13:42.529 "compare_and_write": false, 00:13:42.529 "abort": true, 00:13:42.529 "seek_hole": false, 00:13:42.529 "seek_data": false, 00:13:42.529 "copy": true, 00:13:42.529 "nvme_iov_md": false 00:13:42.529 }, 00:13:42.529 "memory_domains": [ 00:13:42.529 { 00:13:42.529 "dma_device_id": "system", 00:13:42.529 "dma_device_type": 1 00:13:42.529 }, 00:13:42.529 { 00:13:42.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.529 "dma_device_type": 2 00:13:42.529 } 00:13:42.529 ], 00:13:42.529 "driver_specific": {} 00:13:42.529 } 00:13:42.529 ] 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.529 16:39:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.529 "name": "Existed_Raid", 00:13:42.529 "uuid": "5fd9c47e-45df-45ac-9824-71970deba5d5", 00:13:42.529 "strip_size_kb": 64, 00:13:42.529 "state": "online", 00:13:42.529 "raid_level": "raid5f", 00:13:42.529 "superblock": false, 00:13:42.529 "num_base_bdevs": 3, 00:13:42.529 "num_base_bdevs_discovered": 3, 00:13:42.529 "num_base_bdevs_operational": 3, 00:13:42.529 "base_bdevs_list": [ 00:13:42.529 { 00:13:42.529 "name": "BaseBdev1", 00:13:42.529 "uuid": "deecc1a6-4478-4563-ae4e-354e9b7c72af", 00:13:42.529 "is_configured": true, 00:13:42.529 "data_offset": 0, 00:13:42.529 "data_size": 65536 00:13:42.529 }, 00:13:42.529 { 00:13:42.529 "name": "BaseBdev2", 00:13:42.529 "uuid": "cbf05a6a-42ff-4bbb-ad5b-4c13fbfb63ae", 00:13:42.529 "is_configured": true, 00:13:42.529 "data_offset": 0, 00:13:42.529 "data_size": 65536 00:13:42.529 }, 00:13:42.529 { 00:13:42.529 "name": "BaseBdev3", 00:13:42.529 "uuid": "5fbf6e2e-6cc8-4f73-afd8-9a2e82fbc392", 00:13:42.529 "is_configured": true, 00:13:42.529 "data_offset": 0, 00:13:42.529 "data_size": 65536 00:13:42.529 } 00:13:42.529 ] 00:13:42.529 }' 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.529 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.789 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:42.789 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:42.789 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:42.789 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:42.789 16:39:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:42.789 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:42.789 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:42.789 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:42.789 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.789 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.789 [2024-12-07 16:39:41.667418] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.049 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.049 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:43.049 "name": "Existed_Raid", 00:13:43.049 "aliases": [ 00:13:43.049 "5fd9c47e-45df-45ac-9824-71970deba5d5" 00:13:43.049 ], 00:13:43.049 "product_name": "Raid Volume", 00:13:43.049 "block_size": 512, 00:13:43.049 "num_blocks": 131072, 00:13:43.049 "uuid": "5fd9c47e-45df-45ac-9824-71970deba5d5", 00:13:43.049 "assigned_rate_limits": { 00:13:43.049 "rw_ios_per_sec": 0, 00:13:43.049 "rw_mbytes_per_sec": 0, 00:13:43.049 "r_mbytes_per_sec": 0, 00:13:43.049 "w_mbytes_per_sec": 0 00:13:43.049 }, 00:13:43.049 "claimed": false, 00:13:43.049 "zoned": false, 00:13:43.049 "supported_io_types": { 00:13:43.049 "read": true, 00:13:43.049 "write": true, 00:13:43.049 "unmap": false, 00:13:43.049 "flush": false, 00:13:43.049 "reset": true, 00:13:43.049 "nvme_admin": false, 00:13:43.049 "nvme_io": false, 00:13:43.049 "nvme_io_md": false, 00:13:43.049 "write_zeroes": true, 00:13:43.049 "zcopy": false, 00:13:43.049 "get_zone_info": false, 00:13:43.049 "zone_management": false, 00:13:43.049 "zone_append": false, 
00:13:43.049 "compare": false, 00:13:43.049 "compare_and_write": false, 00:13:43.049 "abort": false, 00:13:43.049 "seek_hole": false, 00:13:43.049 "seek_data": false, 00:13:43.049 "copy": false, 00:13:43.049 "nvme_iov_md": false 00:13:43.049 }, 00:13:43.049 "driver_specific": { 00:13:43.049 "raid": { 00:13:43.049 "uuid": "5fd9c47e-45df-45ac-9824-71970deba5d5", 00:13:43.049 "strip_size_kb": 64, 00:13:43.049 "state": "online", 00:13:43.049 "raid_level": "raid5f", 00:13:43.049 "superblock": false, 00:13:43.049 "num_base_bdevs": 3, 00:13:43.049 "num_base_bdevs_discovered": 3, 00:13:43.049 "num_base_bdevs_operational": 3, 00:13:43.049 "base_bdevs_list": [ 00:13:43.049 { 00:13:43.049 "name": "BaseBdev1", 00:13:43.049 "uuid": "deecc1a6-4478-4563-ae4e-354e9b7c72af", 00:13:43.049 "is_configured": true, 00:13:43.049 "data_offset": 0, 00:13:43.049 "data_size": 65536 00:13:43.049 }, 00:13:43.049 { 00:13:43.049 "name": "BaseBdev2", 00:13:43.049 "uuid": "cbf05a6a-42ff-4bbb-ad5b-4c13fbfb63ae", 00:13:43.049 "is_configured": true, 00:13:43.049 "data_offset": 0, 00:13:43.049 "data_size": 65536 00:13:43.049 }, 00:13:43.049 { 00:13:43.049 "name": "BaseBdev3", 00:13:43.049 "uuid": "5fbf6e2e-6cc8-4f73-afd8-9a2e82fbc392", 00:13:43.049 "is_configured": true, 00:13:43.049 "data_offset": 0, 00:13:43.049 "data_size": 65536 00:13:43.049 } 00:13:43.049 ] 00:13:43.049 } 00:13:43.049 } 00:13:43.049 }' 00:13:43.049 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:43.049 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:43.049 BaseBdev2 00:13:43.049 BaseBdev3' 00:13:43.049 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.049 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:13:43.049 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.049 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:43.049 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.049 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.049 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.050 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.050 [2024-12-07 16:39:41.942789] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:43.310 
16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.310 16:39:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.310 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.310 "name": "Existed_Raid", 00:13:43.310 "uuid": "5fd9c47e-45df-45ac-9824-71970deba5d5", 00:13:43.310 "strip_size_kb": 64, 00:13:43.310 "state": 
"online", 00:13:43.310 "raid_level": "raid5f", 00:13:43.310 "superblock": false, 00:13:43.310 "num_base_bdevs": 3, 00:13:43.310 "num_base_bdevs_discovered": 2, 00:13:43.310 "num_base_bdevs_operational": 2, 00:13:43.310 "base_bdevs_list": [ 00:13:43.310 { 00:13:43.310 "name": null, 00:13:43.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.310 "is_configured": false, 00:13:43.310 "data_offset": 0, 00:13:43.310 "data_size": 65536 00:13:43.310 }, 00:13:43.310 { 00:13:43.310 "name": "BaseBdev2", 00:13:43.310 "uuid": "cbf05a6a-42ff-4bbb-ad5b-4c13fbfb63ae", 00:13:43.310 "is_configured": true, 00:13:43.310 "data_offset": 0, 00:13:43.310 "data_size": 65536 00:13:43.310 }, 00:13:43.310 { 00:13:43.310 "name": "BaseBdev3", 00:13:43.310 "uuid": "5fbf6e2e-6cc8-4f73-afd8-9a2e82fbc392", 00:13:43.310 "is_configured": true, 00:13:43.310 "data_offset": 0, 00:13:43.310 "data_size": 65536 00:13:43.310 } 00:13:43.310 ] 00:13:43.310 }' 00:13:43.310 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.310 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.570 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:43.570 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:43.570 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:43.571 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.571 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.571 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.571 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.571 16:39:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:43.571 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:43.571 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:43.571 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.571 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.571 [2024-12-07 16:39:42.466820] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:43.571 [2024-12-07 16:39:42.466975] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.831 [2024-12-07 16:39:42.487015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.831 [2024-12-07 16:39:42.550935] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:43.831 [2024-12-07 16:39:42.551019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.831 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.831 BaseBdev2 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:43.832 [ 00:13:43.832 { 00:13:43.832 "name": "BaseBdev2", 00:13:43.832 "aliases": [ 00:13:43.832 "86e340c1-24a9-4295-8938-8d0c79dbf613" 00:13:43.832 ], 00:13:43.832 "product_name": "Malloc disk", 00:13:43.832 "block_size": 512, 00:13:43.832 "num_blocks": 65536, 00:13:43.832 "uuid": "86e340c1-24a9-4295-8938-8d0c79dbf613", 00:13:43.832 "assigned_rate_limits": { 00:13:43.832 "rw_ios_per_sec": 0, 00:13:43.832 "rw_mbytes_per_sec": 0, 00:13:43.832 "r_mbytes_per_sec": 0, 00:13:43.832 "w_mbytes_per_sec": 0 00:13:43.832 }, 00:13:43.832 "claimed": false, 00:13:43.832 "zoned": false, 00:13:43.832 "supported_io_types": { 00:13:43.832 "read": true, 00:13:43.832 "write": true, 00:13:43.832 "unmap": true, 00:13:43.832 "flush": true, 00:13:43.832 "reset": true, 00:13:43.832 "nvme_admin": false, 00:13:43.832 "nvme_io": false, 00:13:43.832 "nvme_io_md": false, 00:13:43.832 "write_zeroes": true, 00:13:43.832 "zcopy": true, 00:13:43.832 "get_zone_info": false, 00:13:43.832 "zone_management": false, 00:13:43.832 "zone_append": false, 00:13:43.832 "compare": false, 00:13:43.832 "compare_and_write": false, 00:13:43.832 "abort": true, 00:13:43.832 "seek_hole": false, 00:13:43.832 "seek_data": false, 00:13:43.832 "copy": true, 00:13:43.832 "nvme_iov_md": false 00:13:43.832 }, 00:13:43.832 "memory_domains": [ 00:13:43.832 { 00:13:43.832 "dma_device_id": "system", 00:13:43.832 "dma_device_type": 1 00:13:43.832 }, 00:13:43.832 { 00:13:43.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.832 "dma_device_type": 2 00:13:43.832 } 00:13:43.832 ], 00:13:43.832 "driver_specific": {} 00:13:43.832 } 00:13:43.832 ] 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.832 BaseBdev3 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.832 16:39:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:43.832 [ 00:13:43.832 { 00:13:43.832 "name": "BaseBdev3", 00:13:43.832 "aliases": [ 00:13:43.832 "cae4016c-fbcf-4c3f-afde-877e61781814" 00:13:43.832 ], 00:13:43.832 "product_name": "Malloc disk", 00:13:43.832 "block_size": 512, 00:13:43.832 "num_blocks": 65536, 00:13:43.832 "uuid": "cae4016c-fbcf-4c3f-afde-877e61781814", 00:13:44.092 "assigned_rate_limits": { 00:13:44.092 "rw_ios_per_sec": 0, 00:13:44.092 "rw_mbytes_per_sec": 0, 00:13:44.092 "r_mbytes_per_sec": 0, 00:13:44.092 "w_mbytes_per_sec": 0 00:13:44.092 }, 00:13:44.092 "claimed": false, 00:13:44.092 "zoned": false, 00:13:44.092 "supported_io_types": { 00:13:44.092 "read": true, 00:13:44.092 "write": true, 00:13:44.092 "unmap": true, 00:13:44.092 "flush": true, 00:13:44.092 "reset": true, 00:13:44.092 "nvme_admin": false, 00:13:44.092 "nvme_io": false, 00:13:44.092 "nvme_io_md": false, 00:13:44.092 "write_zeroes": true, 00:13:44.092 "zcopy": true, 00:13:44.092 "get_zone_info": false, 00:13:44.092 "zone_management": false, 00:13:44.092 "zone_append": false, 00:13:44.092 "compare": false, 00:13:44.092 "compare_and_write": false, 00:13:44.092 "abort": true, 00:13:44.092 "seek_hole": false, 00:13:44.092 "seek_data": false, 00:13:44.092 "copy": true, 00:13:44.092 "nvme_iov_md": false 00:13:44.092 }, 00:13:44.092 "memory_domains": [ 00:13:44.092 { 00:13:44.092 "dma_device_id": "system", 00:13:44.092 "dma_device_type": 1 00:13:44.092 }, 00:13:44.092 { 00:13:44.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.093 "dma_device_type": 2 00:13:44.093 } 00:13:44.093 ], 00:13:44.093 "driver_specific": {} 00:13:44.093 } 00:13:44.093 ] 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:44.093 16:39:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.093 [2024-12-07 16:39:42.744309] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.093 [2024-12-07 16:39:42.744406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.093 [2024-12-07 16:39:42.744452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.093 [2024-12-07 16:39:42.746580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.093 16:39:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.093 "name": "Existed_Raid", 00:13:44.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.093 "strip_size_kb": 64, 00:13:44.093 "state": "configuring", 00:13:44.093 "raid_level": "raid5f", 00:13:44.093 "superblock": false, 00:13:44.093 "num_base_bdevs": 3, 00:13:44.093 "num_base_bdevs_discovered": 2, 00:13:44.093 "num_base_bdevs_operational": 3, 00:13:44.093 "base_bdevs_list": [ 00:13:44.093 { 00:13:44.093 "name": "BaseBdev1", 00:13:44.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.093 "is_configured": false, 00:13:44.093 "data_offset": 0, 00:13:44.093 "data_size": 0 00:13:44.093 }, 00:13:44.093 { 00:13:44.093 "name": "BaseBdev2", 00:13:44.093 "uuid": "86e340c1-24a9-4295-8938-8d0c79dbf613", 00:13:44.093 "is_configured": true, 00:13:44.093 "data_offset": 0, 00:13:44.093 "data_size": 65536 00:13:44.093 }, 00:13:44.093 { 00:13:44.093 "name": "BaseBdev3", 00:13:44.093 "uuid": "cae4016c-fbcf-4c3f-afde-877e61781814", 00:13:44.093 "is_configured": true, 
00:13:44.093 "data_offset": 0, 00:13:44.093 "data_size": 65536 00:13:44.093 } 00:13:44.093 ] 00:13:44.093 }' 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.093 16:39:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.352 [2024-12-07 16:39:43.175519] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.352 16:39:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.352 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.353 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.353 "name": "Existed_Raid", 00:13:44.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.353 "strip_size_kb": 64, 00:13:44.353 "state": "configuring", 00:13:44.353 "raid_level": "raid5f", 00:13:44.353 "superblock": false, 00:13:44.353 "num_base_bdevs": 3, 00:13:44.353 "num_base_bdevs_discovered": 1, 00:13:44.353 "num_base_bdevs_operational": 3, 00:13:44.353 "base_bdevs_list": [ 00:13:44.353 { 00:13:44.353 "name": "BaseBdev1", 00:13:44.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.353 "is_configured": false, 00:13:44.353 "data_offset": 0, 00:13:44.353 "data_size": 0 00:13:44.353 }, 00:13:44.353 { 00:13:44.353 "name": null, 00:13:44.353 "uuid": "86e340c1-24a9-4295-8938-8d0c79dbf613", 00:13:44.353 "is_configured": false, 00:13:44.353 "data_offset": 0, 00:13:44.353 "data_size": 65536 00:13:44.353 }, 00:13:44.353 { 00:13:44.353 "name": "BaseBdev3", 00:13:44.353 "uuid": "cae4016c-fbcf-4c3f-afde-877e61781814", 00:13:44.353 "is_configured": true, 00:13:44.353 "data_offset": 0, 00:13:44.353 "data_size": 65536 00:13:44.353 } 00:13:44.353 ] 00:13:44.353 }' 00:13:44.353 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.353 16:39:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.922 [2024-12-07 16:39:43.655632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.922 BaseBdev1 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:44.922 16:39:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.922 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.922 [ 00:13:44.923 { 00:13:44.923 "name": "BaseBdev1", 00:13:44.923 "aliases": [ 00:13:44.923 "56cbf07e-c645-42c6-a965-732acffc585a" 00:13:44.923 ], 00:13:44.923 "product_name": "Malloc disk", 00:13:44.923 "block_size": 512, 00:13:44.923 "num_blocks": 65536, 00:13:44.923 "uuid": "56cbf07e-c645-42c6-a965-732acffc585a", 00:13:44.923 "assigned_rate_limits": { 00:13:44.923 "rw_ios_per_sec": 0, 00:13:44.923 "rw_mbytes_per_sec": 0, 00:13:44.923 "r_mbytes_per_sec": 0, 00:13:44.923 "w_mbytes_per_sec": 0 00:13:44.923 }, 00:13:44.923 "claimed": true, 00:13:44.923 "claim_type": "exclusive_write", 00:13:44.923 "zoned": false, 00:13:44.923 "supported_io_types": { 00:13:44.923 "read": true, 00:13:44.923 "write": true, 00:13:44.923 "unmap": true, 00:13:44.923 "flush": true, 00:13:44.923 "reset": true, 00:13:44.923 "nvme_admin": false, 00:13:44.923 "nvme_io": false, 00:13:44.923 "nvme_io_md": false, 00:13:44.923 "write_zeroes": true, 00:13:44.923 "zcopy": true, 00:13:44.923 "get_zone_info": false, 00:13:44.923 "zone_management": false, 00:13:44.923 "zone_append": false, 00:13:44.923 
"compare": false, 00:13:44.923 "compare_and_write": false, 00:13:44.923 "abort": true, 00:13:44.923 "seek_hole": false, 00:13:44.923 "seek_data": false, 00:13:44.923 "copy": true, 00:13:44.923 "nvme_iov_md": false 00:13:44.923 }, 00:13:44.923 "memory_domains": [ 00:13:44.923 { 00:13:44.923 "dma_device_id": "system", 00:13:44.923 "dma_device_type": 1 00:13:44.923 }, 00:13:44.923 { 00:13:44.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.923 "dma_device_type": 2 00:13:44.923 } 00:13:44.923 ], 00:13:44.923 "driver_specific": {} 00:13:44.923 } 00:13:44.923 ] 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.923 16:39:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.923 "name": "Existed_Raid", 00:13:44.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.923 "strip_size_kb": 64, 00:13:44.923 "state": "configuring", 00:13:44.923 "raid_level": "raid5f", 00:13:44.923 "superblock": false, 00:13:44.923 "num_base_bdevs": 3, 00:13:44.923 "num_base_bdevs_discovered": 2, 00:13:44.923 "num_base_bdevs_operational": 3, 00:13:44.923 "base_bdevs_list": [ 00:13:44.923 { 00:13:44.923 "name": "BaseBdev1", 00:13:44.923 "uuid": "56cbf07e-c645-42c6-a965-732acffc585a", 00:13:44.923 "is_configured": true, 00:13:44.923 "data_offset": 0, 00:13:44.923 "data_size": 65536 00:13:44.923 }, 00:13:44.923 { 00:13:44.923 "name": null, 00:13:44.923 "uuid": "86e340c1-24a9-4295-8938-8d0c79dbf613", 00:13:44.923 "is_configured": false, 00:13:44.923 "data_offset": 0, 00:13:44.923 "data_size": 65536 00:13:44.923 }, 00:13:44.923 { 00:13:44.923 "name": "BaseBdev3", 00:13:44.923 "uuid": "cae4016c-fbcf-4c3f-afde-877e61781814", 00:13:44.923 "is_configured": true, 00:13:44.923 "data_offset": 0, 00:13:44.923 "data_size": 65536 00:13:44.923 } 00:13:44.923 ] 00:13:44.923 }' 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.923 16:39:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.494 16:39:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:45.494 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.494 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.494 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.494 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.495 [2024-12-07 16:39:44.147018] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.495 16:39:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.495 "name": "Existed_Raid", 00:13:45.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.495 "strip_size_kb": 64, 00:13:45.495 "state": "configuring", 00:13:45.495 "raid_level": "raid5f", 00:13:45.495 "superblock": false, 00:13:45.495 "num_base_bdevs": 3, 00:13:45.495 "num_base_bdevs_discovered": 1, 00:13:45.495 "num_base_bdevs_operational": 3, 00:13:45.495 "base_bdevs_list": [ 00:13:45.495 { 00:13:45.495 "name": "BaseBdev1", 00:13:45.495 "uuid": "56cbf07e-c645-42c6-a965-732acffc585a", 00:13:45.495 "is_configured": true, 00:13:45.495 "data_offset": 0, 00:13:45.495 "data_size": 65536 00:13:45.495 }, 00:13:45.495 { 00:13:45.495 "name": null, 00:13:45.495 "uuid": "86e340c1-24a9-4295-8938-8d0c79dbf613", 00:13:45.495 "is_configured": false, 00:13:45.495 "data_offset": 0, 00:13:45.495 "data_size": 65536 00:13:45.495 }, 00:13:45.495 { 00:13:45.495 "name": null, 
00:13:45.495 "uuid": "cae4016c-fbcf-4c3f-afde-877e61781814", 00:13:45.495 "is_configured": false, 00:13:45.495 "data_offset": 0, 00:13:45.495 "data_size": 65536 00:13:45.495 } 00:13:45.495 ] 00:13:45.495 }' 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.495 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.755 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:45.755 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.755 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.755 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.755 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.755 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:45.755 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:45.755 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.755 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.755 [2024-12-07 16:39:44.646184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.755 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.755 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:45.755 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.755 16:39:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.756 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.016 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.016 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.016 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.016 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.016 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.016 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.016 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.016 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.016 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.016 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.016 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.016 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.016 "name": "Existed_Raid", 00:13:46.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.016 "strip_size_kb": 64, 00:13:46.016 "state": "configuring", 00:13:46.016 "raid_level": "raid5f", 00:13:46.016 "superblock": false, 00:13:46.016 "num_base_bdevs": 3, 00:13:46.016 "num_base_bdevs_discovered": 2, 00:13:46.016 "num_base_bdevs_operational": 3, 00:13:46.016 "base_bdevs_list": [ 00:13:46.016 { 
00:13:46.016 "name": "BaseBdev1", 00:13:46.016 "uuid": "56cbf07e-c645-42c6-a965-732acffc585a", 00:13:46.016 "is_configured": true, 00:13:46.016 "data_offset": 0, 00:13:46.016 "data_size": 65536 00:13:46.016 }, 00:13:46.016 { 00:13:46.016 "name": null, 00:13:46.016 "uuid": "86e340c1-24a9-4295-8938-8d0c79dbf613", 00:13:46.016 "is_configured": false, 00:13:46.016 "data_offset": 0, 00:13:46.016 "data_size": 65536 00:13:46.016 }, 00:13:46.016 { 00:13:46.016 "name": "BaseBdev3", 00:13:46.016 "uuid": "cae4016c-fbcf-4c3f-afde-877e61781814", 00:13:46.016 "is_configured": true, 00:13:46.016 "data_offset": 0, 00:13:46.016 "data_size": 65536 00:13:46.016 } 00:13:46.016 ] 00:13:46.016 }' 00:13:46.016 16:39:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.016 16:39:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.277 [2024-12-07 16:39:45.109405] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.277 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.536 16:39:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.536 "name": "Existed_Raid", 00:13:46.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.536 "strip_size_kb": 64, 00:13:46.536 "state": "configuring", 00:13:46.536 "raid_level": "raid5f", 00:13:46.536 "superblock": false, 00:13:46.536 "num_base_bdevs": 3, 00:13:46.536 "num_base_bdevs_discovered": 1, 00:13:46.536 "num_base_bdevs_operational": 3, 00:13:46.536 "base_bdevs_list": [ 00:13:46.536 { 00:13:46.536 "name": null, 00:13:46.536 "uuid": "56cbf07e-c645-42c6-a965-732acffc585a", 00:13:46.536 "is_configured": false, 00:13:46.536 "data_offset": 0, 00:13:46.536 "data_size": 65536 00:13:46.536 }, 00:13:46.536 { 00:13:46.536 "name": null, 00:13:46.536 "uuid": "86e340c1-24a9-4295-8938-8d0c79dbf613", 00:13:46.536 "is_configured": false, 00:13:46.536 "data_offset": 0, 00:13:46.536 "data_size": 65536 00:13:46.536 }, 00:13:46.536 { 00:13:46.536 "name": "BaseBdev3", 00:13:46.536 "uuid": "cae4016c-fbcf-4c3f-afde-877e61781814", 00:13:46.536 "is_configured": true, 00:13:46.536 "data_offset": 0, 00:13:46.536 "data_size": 65536 00:13:46.536 } 00:13:46.536 ] 00:13:46.536 }' 00:13:46.536 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.536 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.795 [2024-12-07 16:39:45.640180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.795 16:39:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.795 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.054 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.054 "name": "Existed_Raid", 00:13:47.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.054 "strip_size_kb": 64, 00:13:47.054 "state": "configuring", 00:13:47.054 "raid_level": "raid5f", 00:13:47.054 "superblock": false, 00:13:47.054 "num_base_bdevs": 3, 00:13:47.054 "num_base_bdevs_discovered": 2, 00:13:47.054 "num_base_bdevs_operational": 3, 00:13:47.054 "base_bdevs_list": [ 00:13:47.054 { 00:13:47.054 "name": null, 00:13:47.054 "uuid": "56cbf07e-c645-42c6-a965-732acffc585a", 00:13:47.054 "is_configured": false, 00:13:47.054 "data_offset": 0, 00:13:47.054 "data_size": 65536 00:13:47.054 }, 00:13:47.054 { 00:13:47.054 "name": "BaseBdev2", 00:13:47.054 "uuid": "86e340c1-24a9-4295-8938-8d0c79dbf613", 00:13:47.054 "is_configured": true, 00:13:47.054 "data_offset": 0, 00:13:47.054 "data_size": 65536 00:13:47.054 }, 00:13:47.054 { 00:13:47.054 "name": "BaseBdev3", 00:13:47.054 "uuid": "cae4016c-fbcf-4c3f-afde-877e61781814", 00:13:47.054 "is_configured": true, 00:13:47.054 "data_offset": 0, 00:13:47.054 "data_size": 65536 00:13:47.054 } 00:13:47.054 ] 00:13:47.054 }' 00:13:47.054 16:39:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.054 16:39:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.312 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:47.312 
16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.312 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.312 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.312 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.312 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:47.312 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.312 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:47.312 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.312 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.312 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.312 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 56cbf07e-c645-42c6-a965-732acffc585a 00:13:47.312 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.312 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.571 [2024-12-07 16:39:46.216201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:47.571 [2024-12-07 16:39:46.216313] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:47.571 [2024-12-07 16:39:46.216352] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:47.571 [2024-12-07 16:39:46.216688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006080 00:13:47.571 [2024-12-07 16:39:46.217187] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:47.571 [2024-12-07 16:39:46.217231] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:47.571 [2024-12-07 16:39:46.217488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.571 NewBaseBdev 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.571 16:39:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.571 [ 00:13:47.571 { 00:13:47.571 "name": "NewBaseBdev", 00:13:47.571 "aliases": [ 00:13:47.571 "56cbf07e-c645-42c6-a965-732acffc585a" 00:13:47.571 ], 00:13:47.571 "product_name": "Malloc disk", 00:13:47.571 "block_size": 512, 00:13:47.571 "num_blocks": 65536, 00:13:47.571 "uuid": "56cbf07e-c645-42c6-a965-732acffc585a", 00:13:47.571 "assigned_rate_limits": { 00:13:47.571 "rw_ios_per_sec": 0, 00:13:47.571 "rw_mbytes_per_sec": 0, 00:13:47.571 "r_mbytes_per_sec": 0, 00:13:47.571 "w_mbytes_per_sec": 0 00:13:47.571 }, 00:13:47.571 "claimed": true, 00:13:47.571 "claim_type": "exclusive_write", 00:13:47.571 "zoned": false, 00:13:47.571 "supported_io_types": { 00:13:47.571 "read": true, 00:13:47.571 "write": true, 00:13:47.571 "unmap": true, 00:13:47.571 "flush": true, 00:13:47.571 "reset": true, 00:13:47.571 "nvme_admin": false, 00:13:47.571 "nvme_io": false, 00:13:47.571 "nvme_io_md": false, 00:13:47.571 "write_zeroes": true, 00:13:47.571 "zcopy": true, 00:13:47.571 "get_zone_info": false, 00:13:47.571 "zone_management": false, 00:13:47.571 "zone_append": false, 00:13:47.571 "compare": false, 00:13:47.571 "compare_and_write": false, 00:13:47.571 "abort": true, 00:13:47.571 "seek_hole": false, 00:13:47.571 "seek_data": false, 00:13:47.571 "copy": true, 00:13:47.571 "nvme_iov_md": false 00:13:47.571 }, 00:13:47.571 "memory_domains": [ 00:13:47.571 { 00:13:47.571 "dma_device_id": "system", 00:13:47.571 "dma_device_type": 1 00:13:47.571 }, 00:13:47.571 { 00:13:47.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.571 "dma_device_type": 2 00:13:47.571 } 00:13:47.571 ], 00:13:47.571 "driver_specific": {} 00:13:47.571 } 00:13:47.571 ] 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:47.571 16:39:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.571 "name": "Existed_Raid", 00:13:47.571 "uuid": "427d6c6f-5a32-4aa5-927a-49c918d39898", 00:13:47.571 "strip_size_kb": 64, 00:13:47.571 "state": "online", 
00:13:47.571 "raid_level": "raid5f", 00:13:47.571 "superblock": false, 00:13:47.571 "num_base_bdevs": 3, 00:13:47.571 "num_base_bdevs_discovered": 3, 00:13:47.571 "num_base_bdevs_operational": 3, 00:13:47.571 "base_bdevs_list": [ 00:13:47.571 { 00:13:47.571 "name": "NewBaseBdev", 00:13:47.571 "uuid": "56cbf07e-c645-42c6-a965-732acffc585a", 00:13:47.571 "is_configured": true, 00:13:47.571 "data_offset": 0, 00:13:47.571 "data_size": 65536 00:13:47.571 }, 00:13:47.571 { 00:13:47.571 "name": "BaseBdev2", 00:13:47.571 "uuid": "86e340c1-24a9-4295-8938-8d0c79dbf613", 00:13:47.571 "is_configured": true, 00:13:47.571 "data_offset": 0, 00:13:47.571 "data_size": 65536 00:13:47.571 }, 00:13:47.571 { 00:13:47.571 "name": "BaseBdev3", 00:13:47.571 "uuid": "cae4016c-fbcf-4c3f-afde-877e61781814", 00:13:47.571 "is_configured": true, 00:13:47.571 "data_offset": 0, 00:13:47.571 "data_size": 65536 00:13:47.571 } 00:13:47.571 ] 00:13:47.571 }' 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.571 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.830 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:47.830 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:47.830 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:47.830 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:47.830 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:47.830 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:47.830 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:47.830 16:39:46 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:47.830 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.830 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.830 [2024-12-07 16:39:46.691616] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.830 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.830 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:47.830 "name": "Existed_Raid", 00:13:47.830 "aliases": [ 00:13:47.830 "427d6c6f-5a32-4aa5-927a-49c918d39898" 00:13:47.830 ], 00:13:47.830 "product_name": "Raid Volume", 00:13:47.830 "block_size": 512, 00:13:47.830 "num_blocks": 131072, 00:13:47.830 "uuid": "427d6c6f-5a32-4aa5-927a-49c918d39898", 00:13:47.830 "assigned_rate_limits": { 00:13:47.830 "rw_ios_per_sec": 0, 00:13:47.830 "rw_mbytes_per_sec": 0, 00:13:47.830 "r_mbytes_per_sec": 0, 00:13:47.830 "w_mbytes_per_sec": 0 00:13:47.830 }, 00:13:47.830 "claimed": false, 00:13:47.830 "zoned": false, 00:13:47.830 "supported_io_types": { 00:13:47.830 "read": true, 00:13:47.830 "write": true, 00:13:47.830 "unmap": false, 00:13:47.830 "flush": false, 00:13:47.830 "reset": true, 00:13:47.830 "nvme_admin": false, 00:13:47.830 "nvme_io": false, 00:13:47.830 "nvme_io_md": false, 00:13:47.830 "write_zeroes": true, 00:13:47.830 "zcopy": false, 00:13:47.830 "get_zone_info": false, 00:13:47.830 "zone_management": false, 00:13:47.830 "zone_append": false, 00:13:47.830 "compare": false, 00:13:47.830 "compare_and_write": false, 00:13:47.830 "abort": false, 00:13:47.830 "seek_hole": false, 00:13:47.830 "seek_data": false, 00:13:47.830 "copy": false, 00:13:47.830 "nvme_iov_md": false 00:13:47.830 }, 00:13:47.830 "driver_specific": { 00:13:47.830 "raid": { 00:13:47.830 "uuid": "427d6c6f-5a32-4aa5-927a-49c918d39898", 
00:13:47.830 "strip_size_kb": 64, 00:13:47.830 "state": "online", 00:13:47.830 "raid_level": "raid5f", 00:13:47.830 "superblock": false, 00:13:47.830 "num_base_bdevs": 3, 00:13:47.830 "num_base_bdevs_discovered": 3, 00:13:47.830 "num_base_bdevs_operational": 3, 00:13:47.830 "base_bdevs_list": [ 00:13:47.830 { 00:13:47.830 "name": "NewBaseBdev", 00:13:47.830 "uuid": "56cbf07e-c645-42c6-a965-732acffc585a", 00:13:47.830 "is_configured": true, 00:13:47.830 "data_offset": 0, 00:13:47.830 "data_size": 65536 00:13:47.830 }, 00:13:47.830 { 00:13:47.830 "name": "BaseBdev2", 00:13:47.830 "uuid": "86e340c1-24a9-4295-8938-8d0c79dbf613", 00:13:47.830 "is_configured": true, 00:13:47.830 "data_offset": 0, 00:13:47.830 "data_size": 65536 00:13:47.830 }, 00:13:47.830 { 00:13:47.830 "name": "BaseBdev3", 00:13:47.830 "uuid": "cae4016c-fbcf-4c3f-afde-877e61781814", 00:13:47.830 "is_configured": true, 00:13:47.830 "data_offset": 0, 00:13:47.830 "data_size": 65536 00:13:47.830 } 00:13:47.830 ] 00:13:47.830 } 00:13:47.830 } 00:13:47.830 }' 00:13:47.830 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:48.089 BaseBdev2 00:13:48.089 BaseBdev3' 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.089 
16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.089 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.089 [2024-12-07 16:39:46.947275] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:48.089 [2024-12-07 16:39:46.947373] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:48.090 [2024-12-07 16:39:46.947477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.090 [2024-12-07 16:39:46.947777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.090 [2024-12-07 16:39:46.947841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:48.090 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.090 16:39:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90775 00:13:48.090 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90775 ']' 00:13:48.090 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 90775 
00:13:48.090 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:48.090 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:48.090 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90775 00:13:48.428 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:48.428 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:48.428 killing process with pid 90775 00:13:48.428 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90775' 00:13:48.428 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90775 00:13:48.428 [2024-12-07 16:39:46.993089] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.428 16:39:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90775 00:13:48.428 [2024-12-07 16:39:47.050650] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:48.706 00:13:48.706 real 0m9.151s 00:13:48.706 user 0m15.218s 00:13:48.706 sys 0m2.062s 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.706 ************************************ 00:13:48.706 END TEST raid5f_state_function_test 00:13:48.706 ************************************ 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.706 16:39:47 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:48.706 16:39:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:48.706 
16:39:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.706 16:39:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:48.706 ************************************ 00:13:48.706 START TEST raid5f_state_function_test_sb 00:13:48.706 ************************************ 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:48.706 
16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:48.706 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:48.707 Process raid pid: 91380 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91380 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91380' 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 91380 00:13:48.707 16:39:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 91380 ']' 00:13:48.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.707 16:39:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.966 [2024-12-07 16:39:47.605942] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:48.966 [2024-12-07 16:39:47.606076] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.966 [2024-12-07 16:39:47.772399] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.966 [2024-12-07 16:39:47.840925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.226 [2024-12-07 16:39:47.916361] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.226 [2024-12-07 16:39:47.916400] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:49.793 16:39:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.793 [2024-12-07 16:39:48.427431] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:49.793 [2024-12-07 16:39:48.427532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:49.793 [2024-12-07 16:39:48.427590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:49.793 [2024-12-07 16:39:48.427618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:49.793 [2024-12-07 16:39:48.427637] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:49.793 [2024-12-07 16:39:48.427672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.793 "name": "Existed_Raid", 00:13:49.793 "uuid": "1a72877c-6843-426d-ad46-c6d1714b705d", 00:13:49.793 "strip_size_kb": 64, 00:13:49.793 "state": "configuring", 00:13:49.793 "raid_level": "raid5f", 00:13:49.793 "superblock": true, 00:13:49.793 "num_base_bdevs": 3, 00:13:49.793 "num_base_bdevs_discovered": 0, 00:13:49.793 "num_base_bdevs_operational": 3, 00:13:49.793 "base_bdevs_list": [ 00:13:49.793 { 00:13:49.793 "name": "BaseBdev1", 00:13:49.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.793 "is_configured": false, 00:13:49.793 "data_offset": 0, 00:13:49.793 "data_size": 0 00:13:49.793 }, 00:13:49.793 { 00:13:49.793 "name": "BaseBdev2", 00:13:49.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.793 "is_configured": false, 00:13:49.793 
"data_offset": 0, 00:13:49.793 "data_size": 0 00:13:49.793 }, 00:13:49.793 { 00:13:49.793 "name": "BaseBdev3", 00:13:49.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.793 "is_configured": false, 00:13:49.793 "data_offset": 0, 00:13:49.793 "data_size": 0 00:13:49.793 } 00:13:49.793 ] 00:13:49.793 }' 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.793 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.053 [2024-12-07 16:39:48.874502] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.053 [2024-12-07 16:39:48.874584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.053 [2024-12-07 16:39:48.886514] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.053 [2024-12-07 16:39:48.886589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.053 [2024-12-07 16:39:48.886615] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.053 [2024-12-07 16:39:48.886637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.053 [2024-12-07 16:39:48.886653] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:50.053 [2024-12-07 16:39:48.886672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.053 [2024-12-07 16:39:48.913445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.053 BaseBdev1 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.053 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.053 [ 00:13:50.053 { 00:13:50.053 "name": "BaseBdev1", 00:13:50.053 "aliases": [ 00:13:50.053 "06f97c21-01da-4025-85e1-ea92dd176e14" 00:13:50.053 ], 00:13:50.053 "product_name": "Malloc disk", 00:13:50.053 "block_size": 512, 00:13:50.053 "num_blocks": 65536, 00:13:50.053 "uuid": "06f97c21-01da-4025-85e1-ea92dd176e14", 00:13:50.053 "assigned_rate_limits": { 00:13:50.053 "rw_ios_per_sec": 0, 00:13:50.053 "rw_mbytes_per_sec": 0, 00:13:50.053 "r_mbytes_per_sec": 0, 00:13:50.053 "w_mbytes_per_sec": 0 00:13:50.053 }, 00:13:50.053 "claimed": true, 00:13:50.053 "claim_type": "exclusive_write", 00:13:50.053 "zoned": false, 00:13:50.053 "supported_io_types": { 00:13:50.053 "read": true, 00:13:50.053 "write": true, 00:13:50.053 "unmap": true, 00:13:50.053 "flush": true, 00:13:50.053 "reset": true, 00:13:50.053 "nvme_admin": false, 00:13:50.053 "nvme_io": false, 00:13:50.053 "nvme_io_md": false, 00:13:50.053 "write_zeroes": true, 00:13:50.053 "zcopy": true, 00:13:50.053 "get_zone_info": false, 00:13:50.053 "zone_management": false, 00:13:50.053 "zone_append": false, 00:13:50.053 "compare": false, 00:13:50.053 "compare_and_write": false, 00:13:50.053 "abort": true, 00:13:50.053 "seek_hole": false, 00:13:50.053 
"seek_data": false, 00:13:50.053 "copy": true, 00:13:50.053 "nvme_iov_md": false 00:13:50.053 }, 00:13:50.053 "memory_domains": [ 00:13:50.053 { 00:13:50.053 "dma_device_id": "system", 00:13:50.053 "dma_device_type": 1 00:13:50.053 }, 00:13:50.053 { 00:13:50.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.053 "dma_device_type": 2 00:13:50.053 } 00:13:50.053 ], 00:13:50.053 "driver_specific": {} 00:13:50.053 } 00:13:50.053 ] 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.313 16:39:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.313 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.313 "name": "Existed_Raid", 00:13:50.313 "uuid": "e1eb8dd5-1c29-4025-af5f-65ddfd9f6d65", 00:13:50.313 "strip_size_kb": 64, 00:13:50.313 "state": "configuring", 00:13:50.313 "raid_level": "raid5f", 00:13:50.313 "superblock": true, 00:13:50.313 "num_base_bdevs": 3, 00:13:50.313 "num_base_bdevs_discovered": 1, 00:13:50.313 "num_base_bdevs_operational": 3, 00:13:50.313 "base_bdevs_list": [ 00:13:50.313 { 00:13:50.313 "name": "BaseBdev1", 00:13:50.313 "uuid": "06f97c21-01da-4025-85e1-ea92dd176e14", 00:13:50.313 "is_configured": true, 00:13:50.313 "data_offset": 2048, 00:13:50.313 "data_size": 63488 00:13:50.313 }, 00:13:50.313 { 00:13:50.313 "name": "BaseBdev2", 00:13:50.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.313 "is_configured": false, 00:13:50.313 "data_offset": 0, 00:13:50.313 "data_size": 0 00:13:50.313 }, 00:13:50.313 { 00:13:50.313 "name": "BaseBdev3", 00:13:50.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.313 "is_configured": false, 00:13:50.313 "data_offset": 0, 00:13:50.313 "data_size": 0 00:13:50.313 } 00:13:50.313 ] 00:13:50.313 }' 00:13:50.313 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.313 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.573 [2024-12-07 16:39:49.380653] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.573 [2024-12-07 16:39:49.380730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.573 [2024-12-07 16:39:49.392698] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.573 [2024-12-07 16:39:49.394845] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.573 [2024-12-07 16:39:49.394912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.573 [2024-12-07 16:39:49.394925] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:50.573 [2024-12-07 16:39:49.394935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.573 "name": 
"Existed_Raid", 00:13:50.573 "uuid": "3c00c59d-e0ac-4be3-9957-828de72d9020", 00:13:50.573 "strip_size_kb": 64, 00:13:50.573 "state": "configuring", 00:13:50.573 "raid_level": "raid5f", 00:13:50.573 "superblock": true, 00:13:50.573 "num_base_bdevs": 3, 00:13:50.573 "num_base_bdevs_discovered": 1, 00:13:50.573 "num_base_bdevs_operational": 3, 00:13:50.573 "base_bdevs_list": [ 00:13:50.573 { 00:13:50.573 "name": "BaseBdev1", 00:13:50.573 "uuid": "06f97c21-01da-4025-85e1-ea92dd176e14", 00:13:50.573 "is_configured": true, 00:13:50.573 "data_offset": 2048, 00:13:50.573 "data_size": 63488 00:13:50.573 }, 00:13:50.573 { 00:13:50.573 "name": "BaseBdev2", 00:13:50.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.573 "is_configured": false, 00:13:50.573 "data_offset": 0, 00:13:50.573 "data_size": 0 00:13:50.573 }, 00:13:50.573 { 00:13:50.573 "name": "BaseBdev3", 00:13:50.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.573 "is_configured": false, 00:13:50.573 "data_offset": 0, 00:13:50.573 "data_size": 0 00:13:50.573 } 00:13:50.573 ] 00:13:50.573 }' 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.573 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.143 [2024-12-07 16:39:49.818494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.143 BaseBdev2 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.143 [ 00:13:51.143 { 00:13:51.143 "name": "BaseBdev2", 00:13:51.143 "aliases": [ 00:13:51.143 "4c55a3d1-cd41-460c-8a8c-8a8900115596" 00:13:51.143 ], 00:13:51.143 "product_name": "Malloc disk", 00:13:51.143 "block_size": 512, 00:13:51.143 "num_blocks": 65536, 00:13:51.143 "uuid": "4c55a3d1-cd41-460c-8a8c-8a8900115596", 00:13:51.143 "assigned_rate_limits": { 00:13:51.143 "rw_ios_per_sec": 0, 00:13:51.143 "rw_mbytes_per_sec": 0, 00:13:51.143 "r_mbytes_per_sec": 0, 00:13:51.143 "w_mbytes_per_sec": 0 00:13:51.143 }, 00:13:51.143 "claimed": true, 
00:13:51.143 "claim_type": "exclusive_write", 00:13:51.143 "zoned": false, 00:13:51.143 "supported_io_types": { 00:13:51.143 "read": true, 00:13:51.143 "write": true, 00:13:51.143 "unmap": true, 00:13:51.143 "flush": true, 00:13:51.143 "reset": true, 00:13:51.143 "nvme_admin": false, 00:13:51.143 "nvme_io": false, 00:13:51.143 "nvme_io_md": false, 00:13:51.143 "write_zeroes": true, 00:13:51.143 "zcopy": true, 00:13:51.143 "get_zone_info": false, 00:13:51.143 "zone_management": false, 00:13:51.143 "zone_append": false, 00:13:51.143 "compare": false, 00:13:51.143 "compare_and_write": false, 00:13:51.143 "abort": true, 00:13:51.143 "seek_hole": false, 00:13:51.143 "seek_data": false, 00:13:51.143 "copy": true, 00:13:51.143 "nvme_iov_md": false 00:13:51.143 }, 00:13:51.143 "memory_domains": [ 00:13:51.143 { 00:13:51.143 "dma_device_id": "system", 00:13:51.143 "dma_device_type": 1 00:13:51.143 }, 00:13:51.143 { 00:13:51.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.143 "dma_device_type": 2 00:13:51.143 } 00:13:51.143 ], 00:13:51.143 "driver_specific": {} 00:13:51.143 } 00:13:51.143 ] 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.143 16:39:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.143 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.143 "name": "Existed_Raid", 00:13:51.143 "uuid": "3c00c59d-e0ac-4be3-9957-828de72d9020", 00:13:51.143 "strip_size_kb": 64, 00:13:51.143 "state": "configuring", 00:13:51.143 "raid_level": "raid5f", 00:13:51.143 "superblock": true, 00:13:51.143 "num_base_bdevs": 3, 00:13:51.143 "num_base_bdevs_discovered": 2, 00:13:51.143 "num_base_bdevs_operational": 3, 00:13:51.143 "base_bdevs_list": [ 00:13:51.143 { 00:13:51.143 "name": "BaseBdev1", 00:13:51.143 "uuid": "06f97c21-01da-4025-85e1-ea92dd176e14", 
00:13:51.143 "is_configured": true, 00:13:51.143 "data_offset": 2048, 00:13:51.143 "data_size": 63488 00:13:51.144 }, 00:13:51.144 { 00:13:51.144 "name": "BaseBdev2", 00:13:51.144 "uuid": "4c55a3d1-cd41-460c-8a8c-8a8900115596", 00:13:51.144 "is_configured": true, 00:13:51.144 "data_offset": 2048, 00:13:51.144 "data_size": 63488 00:13:51.144 }, 00:13:51.144 { 00:13:51.144 "name": "BaseBdev3", 00:13:51.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.144 "is_configured": false, 00:13:51.144 "data_offset": 0, 00:13:51.144 "data_size": 0 00:13:51.144 } 00:13:51.144 ] 00:13:51.144 }' 00:13:51.144 16:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.144 16:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.403 [2024-12-07 16:39:50.274372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.403 [2024-12-07 16:39:50.274685] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:51.403 [2024-12-07 16:39:50.274742] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:51.403 [2024-12-07 16:39:50.275089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:51.403 BaseBdev3 00:13:51.403 [2024-12-07 16:39:50.275620] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:51.403 [2024-12-07 16:39:50.275670] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:51.403 
[2024-12-07 16:39:50.275823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.403 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.403 [ 00:13:51.403 { 00:13:51.403 "name": "BaseBdev3", 00:13:51.403 "aliases": [ 00:13:51.403 "8a5dbbec-c694-4f9e-8641-761613ac6129" 00:13:51.403 ], 00:13:51.663 "product_name": "Malloc disk", 00:13:51.663 "block_size": 512, 00:13:51.663 "num_blocks": 
65536, 00:13:51.663 "uuid": "8a5dbbec-c694-4f9e-8641-761613ac6129", 00:13:51.663 "assigned_rate_limits": { 00:13:51.663 "rw_ios_per_sec": 0, 00:13:51.663 "rw_mbytes_per_sec": 0, 00:13:51.663 "r_mbytes_per_sec": 0, 00:13:51.663 "w_mbytes_per_sec": 0 00:13:51.663 }, 00:13:51.663 "claimed": true, 00:13:51.663 "claim_type": "exclusive_write", 00:13:51.663 "zoned": false, 00:13:51.663 "supported_io_types": { 00:13:51.663 "read": true, 00:13:51.663 "write": true, 00:13:51.663 "unmap": true, 00:13:51.663 "flush": true, 00:13:51.663 "reset": true, 00:13:51.663 "nvme_admin": false, 00:13:51.663 "nvme_io": false, 00:13:51.663 "nvme_io_md": false, 00:13:51.663 "write_zeroes": true, 00:13:51.663 "zcopy": true, 00:13:51.663 "get_zone_info": false, 00:13:51.663 "zone_management": false, 00:13:51.663 "zone_append": false, 00:13:51.663 "compare": false, 00:13:51.663 "compare_and_write": false, 00:13:51.663 "abort": true, 00:13:51.663 "seek_hole": false, 00:13:51.663 "seek_data": false, 00:13:51.663 "copy": true, 00:13:51.663 "nvme_iov_md": false 00:13:51.663 }, 00:13:51.663 "memory_domains": [ 00:13:51.663 { 00:13:51.663 "dma_device_id": "system", 00:13:51.663 "dma_device_type": 1 00:13:51.663 }, 00:13:51.663 { 00:13:51.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.663 "dma_device_type": 2 00:13:51.663 } 00:13:51.663 ], 00:13:51.663 "driver_specific": {} 00:13:51.663 } 00:13:51.663 ] 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 
00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.663 "name": "Existed_Raid", 00:13:51.663 "uuid": "3c00c59d-e0ac-4be3-9957-828de72d9020", 00:13:51.663 "strip_size_kb": 64, 00:13:51.663 "state": "online", 00:13:51.663 "raid_level": "raid5f", 00:13:51.663 "superblock": true, 
00:13:51.663 "num_base_bdevs": 3, 00:13:51.663 "num_base_bdevs_discovered": 3, 00:13:51.663 "num_base_bdevs_operational": 3, 00:13:51.663 "base_bdevs_list": [ 00:13:51.663 { 00:13:51.663 "name": "BaseBdev1", 00:13:51.663 "uuid": "06f97c21-01da-4025-85e1-ea92dd176e14", 00:13:51.663 "is_configured": true, 00:13:51.663 "data_offset": 2048, 00:13:51.663 "data_size": 63488 00:13:51.663 }, 00:13:51.663 { 00:13:51.663 "name": "BaseBdev2", 00:13:51.663 "uuid": "4c55a3d1-cd41-460c-8a8c-8a8900115596", 00:13:51.663 "is_configured": true, 00:13:51.663 "data_offset": 2048, 00:13:51.663 "data_size": 63488 00:13:51.663 }, 00:13:51.663 { 00:13:51.663 "name": "BaseBdev3", 00:13:51.663 "uuid": "8a5dbbec-c694-4f9e-8641-761613ac6129", 00:13:51.663 "is_configured": true, 00:13:51.663 "data_offset": 2048, 00:13:51.663 "data_size": 63488 00:13:51.663 } 00:13:51.663 ] 00:13:51.663 }' 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.663 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.923 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:51.923 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:51.923 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:51.923 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:51.923 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:51.923 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:51.923 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:51.923 16:39:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.923 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.923 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:51.923 [2024-12-07 16:39:50.785690] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.923 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:52.184 "name": "Existed_Raid", 00:13:52.184 "aliases": [ 00:13:52.184 "3c00c59d-e0ac-4be3-9957-828de72d9020" 00:13:52.184 ], 00:13:52.184 "product_name": "Raid Volume", 00:13:52.184 "block_size": 512, 00:13:52.184 "num_blocks": 126976, 00:13:52.184 "uuid": "3c00c59d-e0ac-4be3-9957-828de72d9020", 00:13:52.184 "assigned_rate_limits": { 00:13:52.184 "rw_ios_per_sec": 0, 00:13:52.184 "rw_mbytes_per_sec": 0, 00:13:52.184 "r_mbytes_per_sec": 0, 00:13:52.184 "w_mbytes_per_sec": 0 00:13:52.184 }, 00:13:52.184 "claimed": false, 00:13:52.184 "zoned": false, 00:13:52.184 "supported_io_types": { 00:13:52.184 "read": true, 00:13:52.184 "write": true, 00:13:52.184 "unmap": false, 00:13:52.184 "flush": false, 00:13:52.184 "reset": true, 00:13:52.184 "nvme_admin": false, 00:13:52.184 "nvme_io": false, 00:13:52.184 "nvme_io_md": false, 00:13:52.184 "write_zeroes": true, 00:13:52.184 "zcopy": false, 00:13:52.184 "get_zone_info": false, 00:13:52.184 "zone_management": false, 00:13:52.184 "zone_append": false, 00:13:52.184 "compare": false, 00:13:52.184 "compare_and_write": false, 00:13:52.184 "abort": false, 00:13:52.184 "seek_hole": false, 00:13:52.184 "seek_data": false, 00:13:52.184 "copy": false, 00:13:52.184 "nvme_iov_md": false 00:13:52.184 }, 00:13:52.184 "driver_specific": { 00:13:52.184 "raid": { 00:13:52.184 "uuid": "3c00c59d-e0ac-4be3-9957-828de72d9020", 00:13:52.184 
"strip_size_kb": 64, 00:13:52.184 "state": "online", 00:13:52.184 "raid_level": "raid5f", 00:13:52.184 "superblock": true, 00:13:52.184 "num_base_bdevs": 3, 00:13:52.184 "num_base_bdevs_discovered": 3, 00:13:52.184 "num_base_bdevs_operational": 3, 00:13:52.184 "base_bdevs_list": [ 00:13:52.184 { 00:13:52.184 "name": "BaseBdev1", 00:13:52.184 "uuid": "06f97c21-01da-4025-85e1-ea92dd176e14", 00:13:52.184 "is_configured": true, 00:13:52.184 "data_offset": 2048, 00:13:52.184 "data_size": 63488 00:13:52.184 }, 00:13:52.184 { 00:13:52.184 "name": "BaseBdev2", 00:13:52.184 "uuid": "4c55a3d1-cd41-460c-8a8c-8a8900115596", 00:13:52.184 "is_configured": true, 00:13:52.184 "data_offset": 2048, 00:13:52.184 "data_size": 63488 00:13:52.184 }, 00:13:52.184 { 00:13:52.184 "name": "BaseBdev3", 00:13:52.184 "uuid": "8a5dbbec-c694-4f9e-8641-761613ac6129", 00:13:52.184 "is_configured": true, 00:13:52.184 "data_offset": 2048, 00:13:52.184 "data_size": 63488 00:13:52.184 } 00:13:52.184 ] 00:13:52.184 } 00:13:52.184 } 00:13:52.184 }' 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:52.184 BaseBdev2 00:13:52.184 BaseBdev3' 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.184 16:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:52.184 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:52.184 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:52.184 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:52.184 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:52.184 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.184 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.184 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.184 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:52.184 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:52.184 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:52.184 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.184 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.185 [2024-12-07 16:39:51.045106] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.185 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.444 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.444 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.444 "name": "Existed_Raid", 00:13:52.444 "uuid": "3c00c59d-e0ac-4be3-9957-828de72d9020", 00:13:52.444 "strip_size_kb": 64, 00:13:52.444 "state": "online", 00:13:52.444 "raid_level": "raid5f", 00:13:52.444 "superblock": true, 00:13:52.444 "num_base_bdevs": 3, 00:13:52.444 "num_base_bdevs_discovered": 2, 00:13:52.444 "num_base_bdevs_operational": 2, 
00:13:52.444 "base_bdevs_list": [ 00:13:52.444 { 00:13:52.444 "name": null, 00:13:52.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.444 "is_configured": false, 00:13:52.444 "data_offset": 0, 00:13:52.444 "data_size": 63488 00:13:52.444 }, 00:13:52.444 { 00:13:52.444 "name": "BaseBdev2", 00:13:52.444 "uuid": "4c55a3d1-cd41-460c-8a8c-8a8900115596", 00:13:52.444 "is_configured": true, 00:13:52.444 "data_offset": 2048, 00:13:52.444 "data_size": 63488 00:13:52.444 }, 00:13:52.444 { 00:13:52.444 "name": "BaseBdev3", 00:13:52.444 "uuid": "8a5dbbec-c694-4f9e-8641-761613ac6129", 00:13:52.444 "is_configured": true, 00:13:52.444 "data_offset": 2048, 00:13:52.444 "data_size": 63488 00:13:52.444 } 00:13:52.444 ] 00:13:52.444 }' 00:13:52.444 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.444 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.704 [2024-12-07 16:39:51.552742] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:52.704 [2024-12-07 16:39:51.552934] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:52.704 [2024-12-07 16:39:51.572894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.704 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:52.964 
16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.964 [2024-12-07 16:39:51.628805] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:52.964 [2024-12-07 16:39:51.628886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.964 BaseBdev2 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:52.964 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.965 [ 00:13:52.965 { 
00:13:52.965 "name": "BaseBdev2", 00:13:52.965 "aliases": [ 00:13:52.965 "3160d59f-dbef-4681-b71e-fe0c737e5484" 00:13:52.965 ], 00:13:52.965 "product_name": "Malloc disk", 00:13:52.965 "block_size": 512, 00:13:52.965 "num_blocks": 65536, 00:13:52.965 "uuid": "3160d59f-dbef-4681-b71e-fe0c737e5484", 00:13:52.965 "assigned_rate_limits": { 00:13:52.965 "rw_ios_per_sec": 0, 00:13:52.965 "rw_mbytes_per_sec": 0, 00:13:52.965 "r_mbytes_per_sec": 0, 00:13:52.965 "w_mbytes_per_sec": 0 00:13:52.965 }, 00:13:52.965 "claimed": false, 00:13:52.965 "zoned": false, 00:13:52.965 "supported_io_types": { 00:13:52.965 "read": true, 00:13:52.965 "write": true, 00:13:52.965 "unmap": true, 00:13:52.965 "flush": true, 00:13:52.965 "reset": true, 00:13:52.965 "nvme_admin": false, 00:13:52.965 "nvme_io": false, 00:13:52.965 "nvme_io_md": false, 00:13:52.965 "write_zeroes": true, 00:13:52.965 "zcopy": true, 00:13:52.965 "get_zone_info": false, 00:13:52.965 "zone_management": false, 00:13:52.965 "zone_append": false, 00:13:52.965 "compare": false, 00:13:52.965 "compare_and_write": false, 00:13:52.965 "abort": true, 00:13:52.965 "seek_hole": false, 00:13:52.965 "seek_data": false, 00:13:52.965 "copy": true, 00:13:52.965 "nvme_iov_md": false 00:13:52.965 }, 00:13:52.965 "memory_domains": [ 00:13:52.965 { 00:13:52.965 "dma_device_id": "system", 00:13:52.965 "dma_device_type": 1 00:13:52.965 }, 00:13:52.965 { 00:13:52.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.965 "dma_device_type": 2 00:13:52.965 } 00:13:52.965 ], 00:13:52.965 "driver_specific": {} 00:13:52.965 } 00:13:52.965 ] 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.965 BaseBdev3 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.965 16:39:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.965 [ 00:13:52.965 { 00:13:52.965 "name": "BaseBdev3", 00:13:52.965 "aliases": [ 00:13:52.965 "dbf74e8e-e930-4d4d-b925-d73853514f77" 00:13:52.965 ], 00:13:52.965 "product_name": "Malloc disk", 00:13:52.965 "block_size": 512, 00:13:52.965 "num_blocks": 65536, 00:13:52.965 "uuid": "dbf74e8e-e930-4d4d-b925-d73853514f77", 00:13:52.965 "assigned_rate_limits": { 00:13:52.965 "rw_ios_per_sec": 0, 00:13:52.965 "rw_mbytes_per_sec": 0, 00:13:52.965 "r_mbytes_per_sec": 0, 00:13:52.965 "w_mbytes_per_sec": 0 00:13:52.965 }, 00:13:52.965 "claimed": false, 00:13:52.965 "zoned": false, 00:13:52.965 "supported_io_types": { 00:13:52.965 "read": true, 00:13:52.965 "write": true, 00:13:52.965 "unmap": true, 00:13:52.965 "flush": true, 00:13:52.965 "reset": true, 00:13:52.965 "nvme_admin": false, 00:13:52.965 "nvme_io": false, 00:13:52.965 "nvme_io_md": false, 00:13:52.965 "write_zeroes": true, 00:13:52.965 "zcopy": true, 00:13:52.965 "get_zone_info": false, 00:13:52.965 "zone_management": false, 00:13:52.965 "zone_append": false, 00:13:52.965 "compare": false, 00:13:52.965 "compare_and_write": false, 00:13:52.965 "abort": true, 00:13:52.965 "seek_hole": false, 00:13:52.965 "seek_data": false, 00:13:52.965 "copy": true, 00:13:52.965 "nvme_iov_md": false 00:13:52.965 }, 00:13:52.965 "memory_domains": [ 00:13:52.965 { 00:13:52.965 "dma_device_id": "system", 00:13:52.965 "dma_device_type": 1 00:13:52.965 }, 00:13:52.965 { 00:13:52.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.965 "dma_device_type": 2 00:13:52.965 } 00:13:52.965 ], 00:13:52.965 "driver_specific": {} 00:13:52.965 } 00:13:52.965 ] 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.965 [2024-12-07 16:39:51.821746] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.965 [2024-12-07 16:39:51.821829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.965 [2024-12-07 16:39:51.821869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.965 [2024-12-07 16:39:51.824000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.965 16:39:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.965 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.224 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.224 "name": "Existed_Raid", 00:13:53.224 "uuid": "87a2fb4e-7e4b-46a1-b787-33fd3948c251", 00:13:53.224 "strip_size_kb": 64, 00:13:53.224 "state": "configuring", 00:13:53.224 "raid_level": "raid5f", 00:13:53.224 "superblock": true, 00:13:53.224 "num_base_bdevs": 3, 00:13:53.224 "num_base_bdevs_discovered": 2, 00:13:53.224 "num_base_bdevs_operational": 3, 00:13:53.224 "base_bdevs_list": [ 00:13:53.224 { 00:13:53.224 "name": "BaseBdev1", 00:13:53.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.224 "is_configured": false, 00:13:53.224 "data_offset": 0, 00:13:53.224 "data_size": 0 00:13:53.224 }, 00:13:53.224 { 00:13:53.224 "name": "BaseBdev2", 00:13:53.224 "uuid": "3160d59f-dbef-4681-b71e-fe0c737e5484", 00:13:53.224 "is_configured": true, 00:13:53.224 "data_offset": 2048, 00:13:53.224 "data_size": 63488 00:13:53.224 }, 00:13:53.224 { 
00:13:53.224 "name": "BaseBdev3", 00:13:53.224 "uuid": "dbf74e8e-e930-4d4d-b925-d73853514f77", 00:13:53.224 "is_configured": true, 00:13:53.224 "data_offset": 2048, 00:13:53.224 "data_size": 63488 00:13:53.224 } 00:13:53.224 ] 00:13:53.224 }' 00:13:53.224 16:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.224 16:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.483 [2024-12-07 16:39:52.260977] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.483 "name": "Existed_Raid", 00:13:53.483 "uuid": "87a2fb4e-7e4b-46a1-b787-33fd3948c251", 00:13:53.483 "strip_size_kb": 64, 00:13:53.483 "state": "configuring", 00:13:53.483 "raid_level": "raid5f", 00:13:53.483 "superblock": true, 00:13:53.483 "num_base_bdevs": 3, 00:13:53.483 "num_base_bdevs_discovered": 1, 00:13:53.483 "num_base_bdevs_operational": 3, 00:13:53.483 "base_bdevs_list": [ 00:13:53.483 { 00:13:53.483 "name": "BaseBdev1", 00:13:53.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.483 "is_configured": false, 00:13:53.483 "data_offset": 0, 00:13:53.483 "data_size": 0 00:13:53.483 }, 00:13:53.483 { 00:13:53.483 "name": null, 00:13:53.483 "uuid": "3160d59f-dbef-4681-b71e-fe0c737e5484", 00:13:53.483 "is_configured": false, 00:13:53.483 "data_offset": 0, 00:13:53.483 "data_size": 63488 00:13:53.483 }, 00:13:53.483 { 00:13:53.483 "name": "BaseBdev3", 00:13:53.483 "uuid": "dbf74e8e-e930-4d4d-b925-d73853514f77", 00:13:53.483 "is_configured": true, 00:13:53.483 "data_offset": 2048, 00:13:53.483 "data_size": 
63488 00:13:53.483 } 00:13:53.483 ] 00:13:53.483 }' 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.483 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.052 [2024-12-07 16:39:52.748904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.052 BaseBdev1 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:54.052 16:39:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.052 [ 00:13:54.052 { 00:13:54.052 "name": "BaseBdev1", 00:13:54.052 "aliases": [ 00:13:54.052 "475b606e-41fa-4952-ba3d-edb6466a9ae1" 00:13:54.052 ], 00:13:54.052 "product_name": "Malloc disk", 00:13:54.052 "block_size": 512, 00:13:54.052 "num_blocks": 65536, 00:13:54.052 "uuid": "475b606e-41fa-4952-ba3d-edb6466a9ae1", 00:13:54.052 "assigned_rate_limits": { 00:13:54.052 "rw_ios_per_sec": 0, 00:13:54.052 "rw_mbytes_per_sec": 0, 00:13:54.052 "r_mbytes_per_sec": 0, 00:13:54.052 "w_mbytes_per_sec": 0 00:13:54.052 }, 00:13:54.052 "claimed": true, 00:13:54.052 "claim_type": "exclusive_write", 00:13:54.052 "zoned": false, 00:13:54.052 "supported_io_types": { 00:13:54.052 "read": true, 00:13:54.052 "write": true, 00:13:54.052 "unmap": true, 00:13:54.052 "flush": true, 00:13:54.052 "reset": true, 00:13:54.052 "nvme_admin": false, 00:13:54.052 
"nvme_io": false, 00:13:54.052 "nvme_io_md": false, 00:13:54.052 "write_zeroes": true, 00:13:54.052 "zcopy": true, 00:13:54.052 "get_zone_info": false, 00:13:54.052 "zone_management": false, 00:13:54.052 "zone_append": false, 00:13:54.052 "compare": false, 00:13:54.052 "compare_and_write": false, 00:13:54.052 "abort": true, 00:13:54.052 "seek_hole": false, 00:13:54.052 "seek_data": false, 00:13:54.052 "copy": true, 00:13:54.052 "nvme_iov_md": false 00:13:54.052 }, 00:13:54.052 "memory_domains": [ 00:13:54.052 { 00:13:54.052 "dma_device_id": "system", 00:13:54.052 "dma_device_type": 1 00:13:54.052 }, 00:13:54.052 { 00:13:54.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.052 "dma_device_type": 2 00:13:54.052 } 00:13:54.052 ], 00:13:54.052 "driver_specific": {} 00:13:54.052 } 00:13:54.052 ] 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.052 "name": "Existed_Raid", 00:13:54.052 "uuid": "87a2fb4e-7e4b-46a1-b787-33fd3948c251", 00:13:54.052 "strip_size_kb": 64, 00:13:54.052 "state": "configuring", 00:13:54.052 "raid_level": "raid5f", 00:13:54.052 "superblock": true, 00:13:54.052 "num_base_bdevs": 3, 00:13:54.052 "num_base_bdevs_discovered": 2, 00:13:54.052 "num_base_bdevs_operational": 3, 00:13:54.052 "base_bdevs_list": [ 00:13:54.052 { 00:13:54.052 "name": "BaseBdev1", 00:13:54.052 "uuid": "475b606e-41fa-4952-ba3d-edb6466a9ae1", 00:13:54.052 "is_configured": true, 00:13:54.052 "data_offset": 2048, 00:13:54.052 "data_size": 63488 00:13:54.052 }, 00:13:54.052 { 00:13:54.052 "name": null, 00:13:54.052 "uuid": "3160d59f-dbef-4681-b71e-fe0c737e5484", 00:13:54.052 "is_configured": false, 00:13:54.052 "data_offset": 0, 00:13:54.052 "data_size": 63488 00:13:54.052 }, 00:13:54.052 { 00:13:54.052 "name": "BaseBdev3", 00:13:54.052 "uuid": "dbf74e8e-e930-4d4d-b925-d73853514f77", 00:13:54.052 "is_configured": true, 00:13:54.052 "data_offset": 2048, 00:13:54.052 "data_size": 
63488 00:13:54.052 } 00:13:54.052 ] 00:13:54.052 }' 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.052 16:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.621 [2024-12-07 16:39:53.280033] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.621 16:39:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.621 "name": "Existed_Raid", 00:13:54.621 "uuid": "87a2fb4e-7e4b-46a1-b787-33fd3948c251", 00:13:54.621 "strip_size_kb": 64, 00:13:54.621 "state": "configuring", 00:13:54.621 "raid_level": "raid5f", 00:13:54.621 "superblock": true, 00:13:54.621 "num_base_bdevs": 3, 00:13:54.621 "num_base_bdevs_discovered": 1, 00:13:54.621 "num_base_bdevs_operational": 3, 00:13:54.621 "base_bdevs_list": [ 00:13:54.621 { 00:13:54.621 "name": "BaseBdev1", 00:13:54.621 "uuid": "475b606e-41fa-4952-ba3d-edb6466a9ae1", 
00:13:54.621 "is_configured": true, 00:13:54.621 "data_offset": 2048, 00:13:54.621 "data_size": 63488 00:13:54.621 }, 00:13:54.621 { 00:13:54.621 "name": null, 00:13:54.621 "uuid": "3160d59f-dbef-4681-b71e-fe0c737e5484", 00:13:54.621 "is_configured": false, 00:13:54.621 "data_offset": 0, 00:13:54.621 "data_size": 63488 00:13:54.621 }, 00:13:54.621 { 00:13:54.621 "name": null, 00:13:54.621 "uuid": "dbf74e8e-e930-4d4d-b925-d73853514f77", 00:13:54.621 "is_configured": false, 00:13:54.621 "data_offset": 0, 00:13:54.621 "data_size": 63488 00:13:54.621 } 00:13:54.621 ] 00:13:54.621 }' 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.621 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.881 [2024-12-07 16:39:53.759316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.881 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.141 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.141 16:39:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.141 "name": "Existed_Raid", 00:13:55.141 "uuid": "87a2fb4e-7e4b-46a1-b787-33fd3948c251", 00:13:55.141 "strip_size_kb": 64, 00:13:55.141 "state": "configuring", 00:13:55.141 "raid_level": "raid5f", 00:13:55.141 "superblock": true, 00:13:55.141 "num_base_bdevs": 3, 00:13:55.141 "num_base_bdevs_discovered": 2, 00:13:55.141 "num_base_bdevs_operational": 3, 00:13:55.141 "base_bdevs_list": [ 00:13:55.141 { 00:13:55.141 "name": "BaseBdev1", 00:13:55.141 "uuid": "475b606e-41fa-4952-ba3d-edb6466a9ae1", 00:13:55.141 "is_configured": true, 00:13:55.141 "data_offset": 2048, 00:13:55.141 "data_size": 63488 00:13:55.141 }, 00:13:55.141 { 00:13:55.141 "name": null, 00:13:55.141 "uuid": "3160d59f-dbef-4681-b71e-fe0c737e5484", 00:13:55.141 "is_configured": false, 00:13:55.141 "data_offset": 0, 00:13:55.141 "data_size": 63488 00:13:55.141 }, 00:13:55.141 { 00:13:55.141 "name": "BaseBdev3", 00:13:55.141 "uuid": "dbf74e8e-e930-4d4d-b925-d73853514f77", 00:13:55.141 "is_configured": true, 00:13:55.141 "data_offset": 2048, 00:13:55.141 "data_size": 63488 00:13:55.141 } 00:13:55.141 ] 00:13:55.141 }' 00:13:55.141 16:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.141 16:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.401 16:39:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.401 [2024-12-07 16:39:54.222506] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.401 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.402 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.402 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.402 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.402 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:55.402 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.402 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.402 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.402 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.402 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.402 "name": "Existed_Raid", 00:13:55.402 "uuid": "87a2fb4e-7e4b-46a1-b787-33fd3948c251", 00:13:55.402 "strip_size_kb": 64, 00:13:55.402 "state": "configuring", 00:13:55.402 "raid_level": "raid5f", 00:13:55.402 "superblock": true, 00:13:55.402 "num_base_bdevs": 3, 00:13:55.402 "num_base_bdevs_discovered": 1, 00:13:55.402 "num_base_bdevs_operational": 3, 00:13:55.402 "base_bdevs_list": [ 00:13:55.402 { 00:13:55.402 "name": null, 00:13:55.402 "uuid": "475b606e-41fa-4952-ba3d-edb6466a9ae1", 00:13:55.402 "is_configured": false, 00:13:55.402 "data_offset": 0, 00:13:55.402 "data_size": 63488 00:13:55.402 }, 00:13:55.402 { 00:13:55.402 "name": null, 00:13:55.402 "uuid": "3160d59f-dbef-4681-b71e-fe0c737e5484", 00:13:55.402 "is_configured": false, 00:13:55.402 "data_offset": 0, 00:13:55.402 "data_size": 63488 00:13:55.402 }, 00:13:55.402 { 00:13:55.402 "name": "BaseBdev3", 00:13:55.402 "uuid": "dbf74e8e-e930-4d4d-b925-d73853514f77", 00:13:55.402 "is_configured": true, 00:13:55.402 "data_offset": 2048, 00:13:55.402 "data_size": 63488 00:13:55.402 } 00:13:55.402 ] 00:13:55.402 }' 00:13:55.402 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.402 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.972 [2024-12-07 16:39:54.709361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.972 16:39:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.972 "name": "Existed_Raid", 00:13:55.972 "uuid": "87a2fb4e-7e4b-46a1-b787-33fd3948c251", 00:13:55.972 "strip_size_kb": 64, 00:13:55.972 "state": "configuring", 00:13:55.972 "raid_level": "raid5f", 00:13:55.972 "superblock": true, 00:13:55.972 "num_base_bdevs": 3, 00:13:55.972 "num_base_bdevs_discovered": 2, 00:13:55.972 "num_base_bdevs_operational": 3, 00:13:55.972 "base_bdevs_list": [ 00:13:55.972 { 00:13:55.972 "name": null, 00:13:55.972 "uuid": "475b606e-41fa-4952-ba3d-edb6466a9ae1", 00:13:55.972 "is_configured": false, 00:13:55.972 "data_offset": 0, 00:13:55.972 "data_size": 63488 00:13:55.972 }, 00:13:55.972 { 00:13:55.972 "name": "BaseBdev2", 00:13:55.972 "uuid": "3160d59f-dbef-4681-b71e-fe0c737e5484", 00:13:55.972 "is_configured": true, 00:13:55.972 "data_offset": 2048, 00:13:55.972 "data_size": 63488 00:13:55.972 }, 00:13:55.972 { 
00:13:55.972 "name": "BaseBdev3", 00:13:55.972 "uuid": "dbf74e8e-e930-4d4d-b925-d73853514f77", 00:13:55.972 "is_configured": true, 00:13:55.972 "data_offset": 2048, 00:13:55.972 "data_size": 63488 00:13:55.972 } 00:13:55.972 ] 00:13:55.972 }' 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.972 16:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 475b606e-41fa-4952-ba3d-edb6466a9ae1 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.541 [2024-12-07 16:39:55.253041] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:56.541 [2024-12-07 16:39:55.253292] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:56.541 [2024-12-07 16:39:55.253356] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:56.541 NewBaseBdev 00:13:56.541 [2024-12-07 16:39:55.253676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:56.541 [2024-12-07 16:39:55.254157] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:56.541 [2024-12-07 16:39:55.254200] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:56.541 [2024-12-07 16:39:55.254358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- 
# rpc_cmd bdev_wait_for_examine 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.541 [ 00:13:56.541 { 00:13:56.541 "name": "NewBaseBdev", 00:13:56.541 "aliases": [ 00:13:56.541 "475b606e-41fa-4952-ba3d-edb6466a9ae1" 00:13:56.541 ], 00:13:56.541 "product_name": "Malloc disk", 00:13:56.541 "block_size": 512, 00:13:56.541 "num_blocks": 65536, 00:13:56.541 "uuid": "475b606e-41fa-4952-ba3d-edb6466a9ae1", 00:13:56.541 "assigned_rate_limits": { 00:13:56.541 "rw_ios_per_sec": 0, 00:13:56.541 "rw_mbytes_per_sec": 0, 00:13:56.541 "r_mbytes_per_sec": 0, 00:13:56.541 "w_mbytes_per_sec": 0 00:13:56.541 }, 00:13:56.541 "claimed": true, 00:13:56.541 "claim_type": "exclusive_write", 00:13:56.541 "zoned": false, 00:13:56.541 "supported_io_types": { 00:13:56.541 "read": true, 00:13:56.541 "write": true, 00:13:56.541 "unmap": true, 00:13:56.541 "flush": true, 00:13:56.541 "reset": true, 00:13:56.541 "nvme_admin": false, 00:13:56.541 "nvme_io": false, 00:13:56.541 "nvme_io_md": false, 00:13:56.541 "write_zeroes": true, 00:13:56.541 "zcopy": true, 00:13:56.541 "get_zone_info": false, 00:13:56.541 "zone_management": false, 00:13:56.541 "zone_append": false, 00:13:56.541 "compare": false, 00:13:56.541 "compare_and_write": false, 00:13:56.541 "abort": true, 00:13:56.541 "seek_hole": false, 00:13:56.541 "seek_data": false, 00:13:56.541 
"copy": true, 00:13:56.541 "nvme_iov_md": false 00:13:56.541 }, 00:13:56.541 "memory_domains": [ 00:13:56.541 { 00:13:56.541 "dma_device_id": "system", 00:13:56.541 "dma_device_type": 1 00:13:56.541 }, 00:13:56.541 { 00:13:56.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.541 "dma_device_type": 2 00:13:56.541 } 00:13:56.541 ], 00:13:56.541 "driver_specific": {} 00:13:56.541 } 00:13:56.541 ] 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.541 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.542 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.542 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.542 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.542 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.542 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.542 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.542 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.542 16:39:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.542 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.542 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.542 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.542 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.542 "name": "Existed_Raid", 00:13:56.542 "uuid": "87a2fb4e-7e4b-46a1-b787-33fd3948c251", 00:13:56.542 "strip_size_kb": 64, 00:13:56.542 "state": "online", 00:13:56.542 "raid_level": "raid5f", 00:13:56.542 "superblock": true, 00:13:56.542 "num_base_bdevs": 3, 00:13:56.542 "num_base_bdevs_discovered": 3, 00:13:56.542 "num_base_bdevs_operational": 3, 00:13:56.542 "base_bdevs_list": [ 00:13:56.542 { 00:13:56.542 "name": "NewBaseBdev", 00:13:56.542 "uuid": "475b606e-41fa-4952-ba3d-edb6466a9ae1", 00:13:56.542 "is_configured": true, 00:13:56.542 "data_offset": 2048, 00:13:56.542 "data_size": 63488 00:13:56.542 }, 00:13:56.542 { 00:13:56.542 "name": "BaseBdev2", 00:13:56.542 "uuid": "3160d59f-dbef-4681-b71e-fe0c737e5484", 00:13:56.542 "is_configured": true, 00:13:56.542 "data_offset": 2048, 00:13:56.542 "data_size": 63488 00:13:56.542 }, 00:13:56.542 { 00:13:56.542 "name": "BaseBdev3", 00:13:56.542 "uuid": "dbf74e8e-e930-4d4d-b925-d73853514f77", 00:13:56.542 "is_configured": true, 00:13:56.542 "data_offset": 2048, 00:13:56.542 "data_size": 63488 00:13:56.542 } 00:13:56.542 ] 00:13:56.542 }' 00:13:56.542 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.542 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties 
Existed_Raid 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.109 [2024-12-07 16:39:55.712502] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:57.109 "name": "Existed_Raid", 00:13:57.109 "aliases": [ 00:13:57.109 "87a2fb4e-7e4b-46a1-b787-33fd3948c251" 00:13:57.109 ], 00:13:57.109 "product_name": "Raid Volume", 00:13:57.109 "block_size": 512, 00:13:57.109 "num_blocks": 126976, 00:13:57.109 "uuid": "87a2fb4e-7e4b-46a1-b787-33fd3948c251", 00:13:57.109 "assigned_rate_limits": { 00:13:57.109 "rw_ios_per_sec": 0, 00:13:57.109 "rw_mbytes_per_sec": 0, 00:13:57.109 "r_mbytes_per_sec": 0, 00:13:57.109 "w_mbytes_per_sec": 0 00:13:57.109 }, 00:13:57.109 "claimed": false, 00:13:57.109 "zoned": false, 00:13:57.109 "supported_io_types": { 
00:13:57.109 "read": true, 00:13:57.109 "write": true, 00:13:57.109 "unmap": false, 00:13:57.109 "flush": false, 00:13:57.109 "reset": true, 00:13:57.109 "nvme_admin": false, 00:13:57.109 "nvme_io": false, 00:13:57.109 "nvme_io_md": false, 00:13:57.109 "write_zeroes": true, 00:13:57.109 "zcopy": false, 00:13:57.109 "get_zone_info": false, 00:13:57.109 "zone_management": false, 00:13:57.109 "zone_append": false, 00:13:57.109 "compare": false, 00:13:57.109 "compare_and_write": false, 00:13:57.109 "abort": false, 00:13:57.109 "seek_hole": false, 00:13:57.109 "seek_data": false, 00:13:57.109 "copy": false, 00:13:57.109 "nvme_iov_md": false 00:13:57.109 }, 00:13:57.109 "driver_specific": { 00:13:57.109 "raid": { 00:13:57.109 "uuid": "87a2fb4e-7e4b-46a1-b787-33fd3948c251", 00:13:57.109 "strip_size_kb": 64, 00:13:57.109 "state": "online", 00:13:57.109 "raid_level": "raid5f", 00:13:57.109 "superblock": true, 00:13:57.109 "num_base_bdevs": 3, 00:13:57.109 "num_base_bdevs_discovered": 3, 00:13:57.109 "num_base_bdevs_operational": 3, 00:13:57.109 "base_bdevs_list": [ 00:13:57.109 { 00:13:57.109 "name": "NewBaseBdev", 00:13:57.109 "uuid": "475b606e-41fa-4952-ba3d-edb6466a9ae1", 00:13:57.109 "is_configured": true, 00:13:57.109 "data_offset": 2048, 00:13:57.109 "data_size": 63488 00:13:57.109 }, 00:13:57.109 { 00:13:57.109 "name": "BaseBdev2", 00:13:57.109 "uuid": "3160d59f-dbef-4681-b71e-fe0c737e5484", 00:13:57.109 "is_configured": true, 00:13:57.109 "data_offset": 2048, 00:13:57.109 "data_size": 63488 00:13:57.109 }, 00:13:57.109 { 00:13:57.109 "name": "BaseBdev3", 00:13:57.109 "uuid": "dbf74e8e-e930-4d4d-b925-d73853514f77", 00:13:57.109 "is_configured": true, 00:13:57.109 "data_offset": 2048, 00:13:57.109 "data_size": 63488 00:13:57.109 } 00:13:57.109 ] 00:13:57.109 } 00:13:57.109 } 00:13:57.109 }' 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:57.109 BaseBdev2 00:13:57.109 BaseBdev3' 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.109 16:39:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.109 [2024-12-07 16:39:55.975812] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:57.109 [2024-12-07 16:39:55.975871] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:13:57.109 [2024-12-07 16:39:55.975954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.109 [2024-12-07 16:39:55.976229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.109 [2024-12-07 16:39:55.976243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91380 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 91380 ']' 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 91380 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:57.109 16:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91380 00:13:57.368 16:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:57.368 16:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:57.368 16:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91380' 00:13:57.368 killing process with pid 91380 00:13:57.368 16:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 91380 00:13:57.368 [2024-12-07 16:39:56.013575] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.368 16:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 
-- # wait 91380 00:13:57.368 [2024-12-07 16:39:56.070207] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:57.627 16:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:57.627 00:13:57.627 real 0m8.933s 00:13:57.627 user 0m14.905s 00:13:57.627 sys 0m1.996s 00:13:57.627 ************************************ 00:13:57.627 END TEST raid5f_state_function_test_sb 00:13:57.627 ************************************ 00:13:57.627 16:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:57.627 16:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.627 16:39:56 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:57.627 16:39:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:57.627 16:39:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:57.627 16:39:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:57.627 ************************************ 00:13:57.627 START TEST raid5f_superblock_test 00:13:57.627 ************************************ 00:13:57.627 16:39:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:13:57.627 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:57.627 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:57.628 16:39:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91982 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91982 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91982 ']' 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:57.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:57.628 16:39:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.887 [2024-12-07 16:39:56.607997] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:57.887 [2024-12-07 16:39:56.608206] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91982 ] 00:13:57.887 [2024-12-07 16:39:56.772987] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.147 [2024-12-07 16:39:56.841644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.148 [2024-12-07 16:39:56.916496] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.148 [2024-12-07 16:39:56.916603] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:58.718 16:39:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.718 malloc1 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.718 [2024-12-07 16:39:57.458266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:58.718 [2024-12-07 16:39:57.458394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.718 [2024-12-07 16:39:57.458444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:58.718 [2024-12-07 16:39:57.458517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.718 [2024-12-07 16:39:57.461072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.718 pt1 00:13:58.718 [2024-12-07 16:39:57.461141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.718 malloc2 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.718 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.718 [2024-12-07 16:39:57.507819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:58.718 [2024-12-07 16:39:57.508014] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.718 [2024-12-07 16:39:57.508097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:58.719 [2024-12-07 16:39:57.508184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.719 [2024-12-07 16:39:57.513437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.719 [2024-12-07 16:39:57.513582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:58.719 pt2 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.719 malloc3 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.719 [2024-12-07 16:39:57.548289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:58.719 [2024-12-07 16:39:57.548337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.719 [2024-12-07 16:39:57.548365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:58.719 [2024-12-07 16:39:57.548377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.719 [2024-12-07 16:39:57.550729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.719 [2024-12-07 16:39:57.550803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:58.719 pt3 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.719 [2024-12-07 16:39:57.560330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:58.719 [2024-12-07 
16:39:57.562459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:58.719 [2024-12-07 16:39:57.562559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:58.719 [2024-12-07 16:39:57.562755] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:58.719 [2024-12-07 16:39:57.562796] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:58.719 [2024-12-07 16:39:57.563083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:58.719 [2024-12-07 16:39:57.563584] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:58.719 [2024-12-07 16:39:57.563636] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:58.719 [2024-12-07 16:39:57.563800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.719 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.979 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.979 "name": "raid_bdev1", 00:13:58.979 "uuid": "f912d93c-76b6-4889-a143-9d059eab2be7", 00:13:58.979 "strip_size_kb": 64, 00:13:58.979 "state": "online", 00:13:58.979 "raid_level": "raid5f", 00:13:58.979 "superblock": true, 00:13:58.979 "num_base_bdevs": 3, 00:13:58.979 "num_base_bdevs_discovered": 3, 00:13:58.979 "num_base_bdevs_operational": 3, 00:13:58.979 "base_bdevs_list": [ 00:13:58.979 { 00:13:58.979 "name": "pt1", 00:13:58.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:58.979 "is_configured": true, 00:13:58.979 "data_offset": 2048, 00:13:58.979 "data_size": 63488 00:13:58.979 }, 00:13:58.979 { 00:13:58.979 "name": "pt2", 00:13:58.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:58.979 "is_configured": true, 00:13:58.979 "data_offset": 2048, 00:13:58.979 "data_size": 63488 00:13:58.979 }, 00:13:58.979 { 00:13:58.979 "name": "pt3", 00:13:58.979 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:58.979 "is_configured": true, 00:13:58.979 "data_offset": 2048, 00:13:58.979 "data_size": 63488 00:13:58.979 } 00:13:58.979 ] 00:13:58.979 }' 00:13:58.979 16:39:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.979 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.239 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:59.239 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:59.239 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:59.239 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:59.239 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:59.239 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:59.239 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:59.239 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.239 16:39:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:59.239 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.239 [2024-12-07 16:39:57.973508] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.239 16:39:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.239 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:59.239 "name": "raid_bdev1", 00:13:59.239 "aliases": [ 00:13:59.239 "f912d93c-76b6-4889-a143-9d059eab2be7" 00:13:59.239 ], 00:13:59.239 "product_name": "Raid Volume", 00:13:59.239 "block_size": 512, 00:13:59.239 "num_blocks": 126976, 00:13:59.239 "uuid": "f912d93c-76b6-4889-a143-9d059eab2be7", 00:13:59.239 "assigned_rate_limits": { 00:13:59.239 "rw_ios_per_sec": 0, 00:13:59.239 
"rw_mbytes_per_sec": 0, 00:13:59.239 "r_mbytes_per_sec": 0, 00:13:59.239 "w_mbytes_per_sec": 0 00:13:59.239 }, 00:13:59.239 "claimed": false, 00:13:59.239 "zoned": false, 00:13:59.239 "supported_io_types": { 00:13:59.239 "read": true, 00:13:59.239 "write": true, 00:13:59.239 "unmap": false, 00:13:59.239 "flush": false, 00:13:59.239 "reset": true, 00:13:59.239 "nvme_admin": false, 00:13:59.239 "nvme_io": false, 00:13:59.239 "nvme_io_md": false, 00:13:59.239 "write_zeroes": true, 00:13:59.239 "zcopy": false, 00:13:59.239 "get_zone_info": false, 00:13:59.239 "zone_management": false, 00:13:59.239 "zone_append": false, 00:13:59.239 "compare": false, 00:13:59.239 "compare_and_write": false, 00:13:59.239 "abort": false, 00:13:59.239 "seek_hole": false, 00:13:59.239 "seek_data": false, 00:13:59.239 "copy": false, 00:13:59.239 "nvme_iov_md": false 00:13:59.239 }, 00:13:59.239 "driver_specific": { 00:13:59.239 "raid": { 00:13:59.239 "uuid": "f912d93c-76b6-4889-a143-9d059eab2be7", 00:13:59.239 "strip_size_kb": 64, 00:13:59.239 "state": "online", 00:13:59.239 "raid_level": "raid5f", 00:13:59.239 "superblock": true, 00:13:59.239 "num_base_bdevs": 3, 00:13:59.239 "num_base_bdevs_discovered": 3, 00:13:59.239 "num_base_bdevs_operational": 3, 00:13:59.239 "base_bdevs_list": [ 00:13:59.239 { 00:13:59.239 "name": "pt1", 00:13:59.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:59.239 "is_configured": true, 00:13:59.239 "data_offset": 2048, 00:13:59.239 "data_size": 63488 00:13:59.239 }, 00:13:59.239 { 00:13:59.239 "name": "pt2", 00:13:59.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:59.239 "is_configured": true, 00:13:59.239 "data_offset": 2048, 00:13:59.239 "data_size": 63488 00:13:59.239 }, 00:13:59.239 { 00:13:59.239 "name": "pt3", 00:13:59.239 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:59.239 "is_configured": true, 00:13:59.239 "data_offset": 2048, 00:13:59.239 "data_size": 63488 00:13:59.239 } 00:13:59.239 ] 00:13:59.239 } 00:13:59.239 } 
00:13:59.239 }' 00:13:59.239 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:59.239 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:59.239 pt2 00:13:59.239 pt3' 00:13:59.239 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.239 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:59.239 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.239 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:59.239 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.239 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.239 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.239 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.499 [2024-12-07 16:39:58.260961] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f912d93c-76b6-4889-a143-9d059eab2be7 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f912d93c-76b6-4889-a143-9d059eab2be7 ']' 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.499 [2024-12-07 16:39:58.308705] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.499 [2024-12-07 16:39:58.308759] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.499 [2024-12-07 16:39:58.308874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.499 [2024-12-07 16:39:58.308963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.499 [2024-12-07 16:39:58.309010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.499 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:59.500 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.500 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.759 [2024-12-07 16:39:58.460476] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:59.759 [2024-12-07 16:39:58.462637] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:59.759 [2024-12-07 16:39:58.462680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:59.759 [2024-12-07 16:39:58.462730] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:59.759 [2024-12-07 16:39:58.462768] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:59.759 [2024-12-07 16:39:58.462786] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:59.759 [2024-12-07 16:39:58.462798] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.759 [2024-12-07 16:39:58.462810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:13:59.759 request: 00:13:59.759 { 00:13:59.759 "name": "raid_bdev1", 00:13:59.759 "raid_level": "raid5f", 00:13:59.759 "base_bdevs": [ 00:13:59.759 "malloc1", 00:13:59.759 "malloc2", 00:13:59.759 "malloc3" 00:13:59.759 ], 00:13:59.759 "strip_size_kb": 64, 00:13:59.759 "superblock": false, 00:13:59.759 "method": "bdev_raid_create", 00:13:59.759 "req_id": 1 00:13:59.759 } 00:13:59.759 Got JSON-RPC error response 00:13:59.759 response: 00:13:59.759 { 00:13:59.759 "code": -17, 00:13:59.759 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:59.759 } 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:59.759 16:39:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.759 [2024-12-07 16:39:58.516361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:59.759 [2024-12-07 16:39:58.516436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.759 [2024-12-07 16:39:58.516468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:59.759 [2024-12-07 16:39:58.516498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.759 [2024-12-07 16:39:58.518906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.759 [2024-12-07 16:39:58.518970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:59.759 [2024-12-07 16:39:58.519057] bdev_raid.c:3897:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt1 00:13:59.759 [2024-12-07 16:39:58.519109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:59.759 pt1 00:13:59.759 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.760 
16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.760 "name": "raid_bdev1", 00:13:59.760 "uuid": "f912d93c-76b6-4889-a143-9d059eab2be7", 00:13:59.760 "strip_size_kb": 64, 00:13:59.760 "state": "configuring", 00:13:59.760 "raid_level": "raid5f", 00:13:59.760 "superblock": true, 00:13:59.760 "num_base_bdevs": 3, 00:13:59.760 "num_base_bdevs_discovered": 1, 00:13:59.760 "num_base_bdevs_operational": 3, 00:13:59.760 "base_bdevs_list": [ 00:13:59.760 { 00:13:59.760 "name": "pt1", 00:13:59.760 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:59.760 "is_configured": true, 00:13:59.760 "data_offset": 2048, 00:13:59.760 "data_size": 63488 00:13:59.760 }, 00:13:59.760 { 00:13:59.760 "name": null, 00:13:59.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:59.760 "is_configured": false, 00:13:59.760 "data_offset": 2048, 00:13:59.760 "data_size": 63488 00:13:59.760 }, 00:13:59.760 { 00:13:59.760 "name": null, 00:13:59.760 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:59.760 "is_configured": false, 00:13:59.760 "data_offset": 2048, 00:13:59.760 "data_size": 63488 00:13:59.760 } 00:13:59.760 ] 00:13:59.760 }' 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.760 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.329 [2024-12-07 16:39:58.951636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:00.329 
[2024-12-07 16:39:58.951728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.329 [2024-12-07 16:39:58.951765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:00.329 [2024-12-07 16:39:58.951827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.329 [2024-12-07 16:39:58.952283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.329 [2024-12-07 16:39:58.952353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:00.329 [2024-12-07 16:39:58.952461] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:00.329 [2024-12-07 16:39:58.952493] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:00.329 pt2 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.329 [2024-12-07 16:39:58.963615] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.329 
16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.329 16:39:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.329 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.329 "name": "raid_bdev1", 00:14:00.329 "uuid": "f912d93c-76b6-4889-a143-9d059eab2be7", 00:14:00.329 "strip_size_kb": 64, 00:14:00.329 "state": "configuring", 00:14:00.329 "raid_level": "raid5f", 00:14:00.329 "superblock": true, 00:14:00.329 "num_base_bdevs": 3, 00:14:00.329 "num_base_bdevs_discovered": 1, 00:14:00.329 "num_base_bdevs_operational": 3, 00:14:00.329 "base_bdevs_list": [ 00:14:00.329 { 00:14:00.329 "name": "pt1", 00:14:00.329 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.329 "is_configured": true, 00:14:00.329 "data_offset": 2048, 00:14:00.329 "data_size": 63488 00:14:00.329 }, 00:14:00.329 { 00:14:00.329 "name": null, 00:14:00.329 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:14:00.329 "is_configured": false, 00:14:00.329 "data_offset": 0, 00:14:00.329 "data_size": 63488 00:14:00.329 }, 00:14:00.329 { 00:14:00.329 "name": null, 00:14:00.329 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.329 "is_configured": false, 00:14:00.329 "data_offset": 2048, 00:14:00.329 "data_size": 63488 00:14:00.329 } 00:14:00.329 ] 00:14:00.329 }' 00:14:00.329 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.329 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.590 [2024-12-07 16:39:59.402954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:00.590 [2024-12-07 16:39:59.403039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.590 [2024-12-07 16:39:59.403073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:00.590 [2024-12-07 16:39:59.403100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.590 [2024-12-07 16:39:59.403551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.590 [2024-12-07 16:39:59.403603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:00.590 [2024-12-07 16:39:59.403692] bdev_raid.c:3897:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt2 00:14:00.590 [2024-12-07 16:39:59.403739] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:00.590 pt2 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.590 [2024-12-07 16:39:59.414913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:00.590 [2024-12-07 16:39:59.414983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.590 [2024-12-07 16:39:59.415015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:00.590 [2024-12-07 16:39:59.415037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.590 [2024-12-07 16:39:59.415440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.590 [2024-12-07 16:39:59.415459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:00.590 [2024-12-07 16:39:59.415513] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:00.590 [2024-12-07 16:39:59.415530] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:00.590 [2024-12-07 16:39:59.415643] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:00.590 [2024-12-07 
16:39:59.415653] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:00.590 [2024-12-07 16:39:59.415900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:00.590 [2024-12-07 16:39:59.416338] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:00.590 [2024-12-07 16:39:59.416378] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:14:00.590 [2024-12-07 16:39:59.416478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.590 pt3 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.590 
16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.590 "name": "raid_bdev1", 00:14:00.590 "uuid": "f912d93c-76b6-4889-a143-9d059eab2be7", 00:14:00.590 "strip_size_kb": 64, 00:14:00.590 "state": "online", 00:14:00.590 "raid_level": "raid5f", 00:14:00.590 "superblock": true, 00:14:00.590 "num_base_bdevs": 3, 00:14:00.590 "num_base_bdevs_discovered": 3, 00:14:00.590 "num_base_bdevs_operational": 3, 00:14:00.590 "base_bdevs_list": [ 00:14:00.590 { 00:14:00.590 "name": "pt1", 00:14:00.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.590 "is_configured": true, 00:14:00.590 "data_offset": 2048, 00:14:00.590 "data_size": 63488 00:14:00.590 }, 00:14:00.590 { 00:14:00.590 "name": "pt2", 00:14:00.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.590 "is_configured": true, 00:14:00.590 "data_offset": 2048, 00:14:00.590 "data_size": 63488 00:14:00.590 }, 00:14:00.590 { 00:14:00.590 "name": "pt3", 00:14:00.590 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.590 "is_configured": true, 00:14:00.590 "data_offset": 2048, 00:14:00.590 "data_size": 63488 00:14:00.590 } 00:14:00.590 ] 00:14:00.590 }' 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.590 16:39:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.161 [2024-12-07 16:39:59.862377] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:01.161 "name": "raid_bdev1", 00:14:01.161 "aliases": [ 00:14:01.161 "f912d93c-76b6-4889-a143-9d059eab2be7" 00:14:01.161 ], 00:14:01.161 "product_name": "Raid Volume", 00:14:01.161 "block_size": 512, 00:14:01.161 "num_blocks": 126976, 00:14:01.161 "uuid": "f912d93c-76b6-4889-a143-9d059eab2be7", 00:14:01.161 "assigned_rate_limits": { 00:14:01.161 "rw_ios_per_sec": 0, 00:14:01.161 "rw_mbytes_per_sec": 0, 00:14:01.161 "r_mbytes_per_sec": 0, 00:14:01.161 "w_mbytes_per_sec": 0 00:14:01.161 }, 00:14:01.161 "claimed": false, 
00:14:01.161 "zoned": false, 00:14:01.161 "supported_io_types": { 00:14:01.161 "read": true, 00:14:01.161 "write": true, 00:14:01.161 "unmap": false, 00:14:01.161 "flush": false, 00:14:01.161 "reset": true, 00:14:01.161 "nvme_admin": false, 00:14:01.161 "nvme_io": false, 00:14:01.161 "nvme_io_md": false, 00:14:01.161 "write_zeroes": true, 00:14:01.161 "zcopy": false, 00:14:01.161 "get_zone_info": false, 00:14:01.161 "zone_management": false, 00:14:01.161 "zone_append": false, 00:14:01.161 "compare": false, 00:14:01.161 "compare_and_write": false, 00:14:01.161 "abort": false, 00:14:01.161 "seek_hole": false, 00:14:01.161 "seek_data": false, 00:14:01.161 "copy": false, 00:14:01.161 "nvme_iov_md": false 00:14:01.161 }, 00:14:01.161 "driver_specific": { 00:14:01.161 "raid": { 00:14:01.161 "uuid": "f912d93c-76b6-4889-a143-9d059eab2be7", 00:14:01.161 "strip_size_kb": 64, 00:14:01.161 "state": "online", 00:14:01.161 "raid_level": "raid5f", 00:14:01.161 "superblock": true, 00:14:01.161 "num_base_bdevs": 3, 00:14:01.161 "num_base_bdevs_discovered": 3, 00:14:01.161 "num_base_bdevs_operational": 3, 00:14:01.161 "base_bdevs_list": [ 00:14:01.161 { 00:14:01.161 "name": "pt1", 00:14:01.161 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:01.161 "is_configured": true, 00:14:01.161 "data_offset": 2048, 00:14:01.161 "data_size": 63488 00:14:01.161 }, 00:14:01.161 { 00:14:01.161 "name": "pt2", 00:14:01.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.161 "is_configured": true, 00:14:01.161 "data_offset": 2048, 00:14:01.161 "data_size": 63488 00:14:01.161 }, 00:14:01.161 { 00:14:01.161 "name": "pt3", 00:14:01.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.161 "is_configured": true, 00:14:01.161 "data_offset": 2048, 00:14:01.161 "data_size": 63488 00:14:01.161 } 00:14:01.161 ] 00:14:01.161 } 00:14:01.161 } 00:14:01.161 }' 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:01.161 pt2 00:14:01.161 pt3' 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.161 16:39:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.161 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.161 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.161 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.161 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:01.161 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.161 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.161 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.161 16:40:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.422 [2024-12-07 16:40:00.121855] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
f912d93c-76b6-4889-a143-9d059eab2be7 '!=' f912d93c-76b6-4889-a143-9d059eab2be7 ']' 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.422 [2024-12-07 16:40:00.169650] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.422 "name": "raid_bdev1", 00:14:01.422 "uuid": "f912d93c-76b6-4889-a143-9d059eab2be7", 00:14:01.422 "strip_size_kb": 64, 00:14:01.422 "state": "online", 00:14:01.422 "raid_level": "raid5f", 00:14:01.422 "superblock": true, 00:14:01.422 "num_base_bdevs": 3, 00:14:01.422 "num_base_bdevs_discovered": 2, 00:14:01.422 "num_base_bdevs_operational": 2, 00:14:01.422 "base_bdevs_list": [ 00:14:01.422 { 00:14:01.422 "name": null, 00:14:01.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.422 "is_configured": false, 00:14:01.422 "data_offset": 0, 00:14:01.422 "data_size": 63488 00:14:01.422 }, 00:14:01.422 { 00:14:01.422 "name": "pt2", 00:14:01.422 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.422 "is_configured": true, 00:14:01.422 "data_offset": 2048, 00:14:01.422 "data_size": 63488 00:14:01.422 }, 00:14:01.422 { 00:14:01.422 "name": "pt3", 00:14:01.422 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.422 "is_configured": true, 00:14:01.422 "data_offset": 2048, 00:14:01.422 "data_size": 63488 00:14:01.422 } 00:14:01.422 ] 00:14:01.422 }' 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.422 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.993 
16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.993 [2024-12-07 16:40:00.600858] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:01.993 [2024-12-07 16:40:00.600922] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.993 [2024-12-07 16:40:00.601010] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.993 [2024-12-07 16:40:00.601100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.993 [2024-12-07 16:40:00.601183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.993 [2024-12-07 16:40:00.676749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:14:01.993 [2024-12-07 16:40:00.676829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.993 [2024-12-07 16:40:00.676862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:01.993 [2024-12-07 16:40:00.676888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.993 [2024-12-07 16:40:00.679390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.993 [2024-12-07 16:40:00.679453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:01.993 [2024-12-07 16:40:00.679550] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:01.993 [2024-12-07 16:40:00.679604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:01.993 pt2 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.993 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.993 "name": "raid_bdev1", 00:14:01.994 "uuid": "f912d93c-76b6-4889-a143-9d059eab2be7", 00:14:01.994 "strip_size_kb": 64, 00:14:01.994 "state": "configuring", 00:14:01.994 "raid_level": "raid5f", 00:14:01.994 "superblock": true, 00:14:01.994 "num_base_bdevs": 3, 00:14:01.994 "num_base_bdevs_discovered": 1, 00:14:01.994 "num_base_bdevs_operational": 2, 00:14:01.994 "base_bdevs_list": [ 00:14:01.994 { 00:14:01.994 "name": null, 00:14:01.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.994 "is_configured": false, 00:14:01.994 "data_offset": 2048, 00:14:01.994 "data_size": 63488 00:14:01.994 }, 00:14:01.994 { 00:14:01.994 "name": "pt2", 00:14:01.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.994 "is_configured": true, 00:14:01.994 "data_offset": 2048, 00:14:01.994 "data_size": 63488 00:14:01.994 }, 00:14:01.994 { 00:14:01.994 "name": null, 00:14:01.994 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.994 "is_configured": false, 00:14:01.994 "data_offset": 2048, 00:14:01.994 "data_size": 63488 00:14:01.994 } 00:14:01.994 ] 00:14:01.994 }' 00:14:01.994 16:40:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.994 16:40:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.254 [2024-12-07 16:40:01.124006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:02.254 [2024-12-07 16:40:01.124097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.254 [2024-12-07 16:40:01.124138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:02.254 [2024-12-07 16:40:01.124166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.254 [2024-12-07 16:40:01.124650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.254 [2024-12-07 16:40:01.124719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:02.254 [2024-12-07 16:40:01.124810] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:02.254 [2024-12-07 16:40:01.124842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:02.254 [2024-12-07 16:40:01.124951] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:02.254 [2024-12-07 16:40:01.124959] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:02.254 [2024-12-07 
16:40:01.125227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:02.254 [2024-12-07 16:40:01.125735] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:02.254 [2024-12-07 16:40:01.125751] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:14:02.254 [2024-12-07 16:40:01.125980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.254 pt3 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.254 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.513 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.513 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.513 "name": "raid_bdev1", 00:14:02.513 "uuid": "f912d93c-76b6-4889-a143-9d059eab2be7", 00:14:02.513 "strip_size_kb": 64, 00:14:02.513 "state": "online", 00:14:02.513 "raid_level": "raid5f", 00:14:02.513 "superblock": true, 00:14:02.513 "num_base_bdevs": 3, 00:14:02.513 "num_base_bdevs_discovered": 2, 00:14:02.513 "num_base_bdevs_operational": 2, 00:14:02.513 "base_bdevs_list": [ 00:14:02.513 { 00:14:02.513 "name": null, 00:14:02.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.513 "is_configured": false, 00:14:02.513 "data_offset": 2048, 00:14:02.513 "data_size": 63488 00:14:02.513 }, 00:14:02.513 { 00:14:02.513 "name": "pt2", 00:14:02.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.513 "is_configured": true, 00:14:02.513 "data_offset": 2048, 00:14:02.513 "data_size": 63488 00:14:02.513 }, 00:14:02.513 { 00:14:02.513 "name": "pt3", 00:14:02.513 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.513 "is_configured": true, 00:14:02.513 "data_offset": 2048, 00:14:02.513 "data_size": 63488 00:14:02.513 } 00:14:02.513 ] 00:14:02.513 }' 00:14:02.513 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.513 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.772 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:02.773 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.773 16:40:01 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.773 [2024-12-07 16:40:01.607263] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:02.773 [2024-12-07 16:40:01.607364] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.773 [2024-12-07 16:40:01.607457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.773 [2024-12-07 16:40:01.607532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.773 [2024-12-07 16:40:01.607579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:14:02.773 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.773 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.773 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.773 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:02.773 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.773 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.773 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:02.773 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:02.773 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:02.773 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:02.773 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:02.773 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.773 16:40:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.032 [2024-12-07 16:40:01.683133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:03.032 [2024-12-07 16:40:01.683223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.032 [2024-12-07 16:40:01.683254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:03.032 [2024-12-07 16:40:01.683285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.032 [2024-12-07 16:40:01.685767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.032 [2024-12-07 16:40:01.685832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:03.032 [2024-12-07 16:40:01.685915] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:03.032 [2024-12-07 16:40:01.685989] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:03.032 [2024-12-07 16:40:01.686114] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:03.032 [2024-12-07 16:40:01.686170] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.032 [2024-12-07 16:40:01.686253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:14:03.032 
[2024-12-07 16:40:01.686330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:03.032 pt1 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.032 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.032 "name": "raid_bdev1", 00:14:03.032 "uuid": "f912d93c-76b6-4889-a143-9d059eab2be7", 00:14:03.032 "strip_size_kb": 64, 00:14:03.032 "state": "configuring", 00:14:03.032 "raid_level": "raid5f", 00:14:03.032 "superblock": true, 00:14:03.032 "num_base_bdevs": 3, 00:14:03.032 "num_base_bdevs_discovered": 1, 00:14:03.032 "num_base_bdevs_operational": 2, 00:14:03.032 "base_bdevs_list": [ 00:14:03.032 { 00:14:03.032 "name": null, 00:14:03.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.032 "is_configured": false, 00:14:03.032 "data_offset": 2048, 00:14:03.032 "data_size": 63488 00:14:03.032 }, 00:14:03.032 { 00:14:03.032 "name": "pt2", 00:14:03.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.032 "is_configured": true, 00:14:03.032 "data_offset": 2048, 00:14:03.032 "data_size": 63488 00:14:03.032 }, 00:14:03.032 { 00:14:03.032 "name": null, 00:14:03.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.033 "is_configured": false, 00:14:03.033 "data_offset": 2048, 00:14:03.033 "data_size": 63488 00:14:03.033 } 00:14:03.033 ] 00:14:03.033 }' 00:14:03.033 16:40:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.033 16:40:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.292 [2024-12-07 16:40:02.106427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:03.292 [2024-12-07 16:40:02.106523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.292 [2024-12-07 16:40:02.106557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:03.292 [2024-12-07 16:40:02.106587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.292 [2024-12-07 16:40:02.107067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.292 [2024-12-07 16:40:02.107130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:03.292 [2024-12-07 16:40:02.107234] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:03.292 [2024-12-07 16:40:02.107288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:03.292 [2024-12-07 16:40:02.107426] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:03.292 [2024-12-07 16:40:02.107469] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:03.292 [2024-12-07 16:40:02.107737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:03.292 [2024-12-07 16:40:02.108283] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:03.292 [2024-12-07 
16:40:02.108357] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:03.292 [2024-12-07 16:40:02.108580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.292 pt3 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.292 16:40:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.292 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.292 "name": "raid_bdev1", 00:14:03.292 "uuid": "f912d93c-76b6-4889-a143-9d059eab2be7", 00:14:03.292 "strip_size_kb": 64, 00:14:03.292 "state": "online", 00:14:03.292 "raid_level": "raid5f", 00:14:03.292 "superblock": true, 00:14:03.292 "num_base_bdevs": 3, 00:14:03.292 "num_base_bdevs_discovered": 2, 00:14:03.292 "num_base_bdevs_operational": 2, 00:14:03.292 "base_bdevs_list": [ 00:14:03.292 { 00:14:03.292 "name": null, 00:14:03.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.293 "is_configured": false, 00:14:03.293 "data_offset": 2048, 00:14:03.293 "data_size": 63488 00:14:03.293 }, 00:14:03.293 { 00:14:03.293 "name": "pt2", 00:14:03.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.293 "is_configured": true, 00:14:03.293 "data_offset": 2048, 00:14:03.293 "data_size": 63488 00:14:03.293 }, 00:14:03.293 { 00:14:03.293 "name": "pt3", 00:14:03.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.293 "is_configured": true, 00:14:03.293 "data_offset": 2048, 00:14:03.293 "data_size": 63488 00:14:03.293 } 00:14:03.293 ] 00:14:03.293 }' 00:14:03.293 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.293 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.861 [2024-12-07 16:40:02.589948] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f912d93c-76b6-4889-a143-9d059eab2be7 '!=' f912d93c-76b6-4889-a143-9d059eab2be7 ']' 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91982 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91982 ']' 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91982 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91982 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 91982' 00:14:03.861 killing process with pid 91982 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91982 00:14:03.861 [2024-12-07 16:40:02.669565] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.861 [2024-12-07 16:40:02.669660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.861 [2024-12-07 16:40:02.669726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.861 [2024-12-07 16:40:02.669735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:03.861 16:40:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91982 00:14:03.861 [2024-12-07 16:40:02.727946] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.431 16:40:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:04.431 00:14:04.431 real 0m6.588s 00:14:04.431 user 0m10.716s 00:14:04.431 sys 0m1.513s 00:14:04.431 16:40:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.431 16:40:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.431 ************************************ 00:14:04.431 END TEST raid5f_superblock_test 00:14:04.431 ************************************ 00:14:04.431 16:40:03 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:04.431 16:40:03 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:04.431 16:40:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:04.431 16:40:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.431 16:40:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:04.431 ************************************ 00:14:04.431 START TEST raid5f_rebuild_test 
00:14:04.431 ************************************ 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92415 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92415 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 92415 ']' 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:04.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:04.431 16:40:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.431 [2024-12-07 16:40:03.280563] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:04.431 [2024-12-07 16:40:03.280778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92415 ] 00:14:04.431 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:04.431 Zero copy mechanism will not be used. 00:14:04.691 [2024-12-07 16:40:03.445231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.691 [2024-12-07 16:40:03.514903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.950 [2024-12-07 16:40:03.591016] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.950 [2024-12-07 16:40:03.591147] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.210 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:05.210 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:05.210 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:05.210 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:05.210 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.210 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.470 BaseBdev1_malloc 00:14:05.470 
16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.470 [2024-12-07 16:40:04.121464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:05.470 [2024-12-07 16:40:04.121531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.470 [2024-12-07 16:40:04.121569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:05.470 [2024-12-07 16:40:04.121585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.470 [2024-12-07 16:40:04.124027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.470 [2024-12-07 16:40:04.124066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:05.470 BaseBdev1 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.470 BaseBdev2_malloc 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.470 [2024-12-07 16:40:04.171475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:05.470 [2024-12-07 16:40:04.171665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.470 [2024-12-07 16:40:04.171720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:05.470 [2024-12-07 16:40:04.171742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.470 [2024-12-07 16:40:04.176812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.470 [2024-12-07 16:40:04.176878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:05.470 BaseBdev2 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.470 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.471 BaseBdev3_malloc 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.471 [2024-12-07 16:40:04.208125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:05.471 [2024-12-07 16:40:04.208169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.471 [2024-12-07 16:40:04.208196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:05.471 [2024-12-07 16:40:04.208205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.471 [2024-12-07 16:40:04.210578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.471 [2024-12-07 16:40:04.210644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:05.471 BaseBdev3 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.471 spare_malloc 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.471 spare_delay 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p 
spare 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.471 [2024-12-07 16:40:04.254666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:05.471 [2024-12-07 16:40:04.254711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.471 [2024-12-07 16:40:04.254737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:05.471 [2024-12-07 16:40:04.254746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.471 [2024-12-07 16:40:04.257186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.471 [2024-12-07 16:40:04.257219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:05.471 spare 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.471 [2024-12-07 16:40:04.266712] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:05.471 [2024-12-07 16:40:04.268845] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:05.471 [2024-12-07 16:40:04.268982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.471 [2024-12-07 16:40:04.269069] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:05.471 [2024-12-07 16:40:04.269081] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:05.471 [2024-12-07 16:40:04.269337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:05.471 [2024-12-07 16:40:04.269774] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:05.471 [2024-12-07 16:40:04.269785] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:05.471 [2024-12-07 16:40:04.269911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.471 "name": "raid_bdev1", 00:14:05.471 "uuid": "25f1a26b-b55d-428c-84e0-b9ef09b00252", 00:14:05.471 "strip_size_kb": 64, 00:14:05.471 "state": "online", 00:14:05.471 "raid_level": "raid5f", 00:14:05.471 "superblock": false, 00:14:05.471 "num_base_bdevs": 3, 00:14:05.471 "num_base_bdevs_discovered": 3, 00:14:05.471 "num_base_bdevs_operational": 3, 00:14:05.471 "base_bdevs_list": [ 00:14:05.471 { 00:14:05.471 "name": "BaseBdev1", 00:14:05.471 "uuid": "e93e1bd2-2a18-564c-8f5c-7e57cff24a54", 00:14:05.471 "is_configured": true, 00:14:05.471 "data_offset": 0, 00:14:05.471 "data_size": 65536 00:14:05.471 }, 00:14:05.471 { 00:14:05.471 "name": "BaseBdev2", 00:14:05.471 "uuid": "a6655a9e-fdde-5cf9-a4ae-6dc71822b4bd", 00:14:05.471 "is_configured": true, 00:14:05.471 "data_offset": 0, 00:14:05.471 "data_size": 65536 00:14:05.471 }, 00:14:05.471 { 00:14:05.471 "name": "BaseBdev3", 00:14:05.471 "uuid": "1958657e-e115-57c1-b9d4-e61346043576", 00:14:05.471 "is_configured": true, 00:14:05.471 "data_offset": 0, 00:14:05.471 "data_size": 65536 00:14:05.471 } 00:14:05.471 ] 00:14:05.471 }' 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.471 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.040 16:40:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.040 [2024-12-07 16:40:04.675665] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.040 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:06.300 [2024-12-07 16:40:04.943090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:06.300 /dev/nbd0 00:14:06.300 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:06.300 16:40:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:06.300 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:06.300 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:06.300 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:06.300 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:06.300 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:06.300 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:06.300 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:06.300 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:06.300 16:40:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.300 1+0 records in 00:14:06.300 1+0 
records out 00:14:06.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399397 s, 10.3 MB/s 00:14:06.300 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.300 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:06.300 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.300 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:06.300 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:06.300 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.300 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.300 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:06.300 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:06.300 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:06.300 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:06.560 512+0 records in 00:14:06.560 512+0 records out 00:14:06.560 67108864 bytes (67 MB, 64 MiB) copied, 0.312769 s, 215 MB/s 00:14:06.560 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:06.560 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.560 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:06.560 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:06.560 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 
00:14:06.560 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.560 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:06.818 [2024-12-07 16:40:05.519192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.818 [2024-12-07 16:40:05.557378] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.818 16:40:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.818 "name": "raid_bdev1", 00:14:06.818 "uuid": "25f1a26b-b55d-428c-84e0-b9ef09b00252", 00:14:06.818 "strip_size_kb": 64, 00:14:06.818 "state": "online", 00:14:06.818 "raid_level": "raid5f", 00:14:06.818 "superblock": false, 00:14:06.818 "num_base_bdevs": 3, 00:14:06.818 "num_base_bdevs_discovered": 2, 00:14:06.818 "num_base_bdevs_operational": 2, 00:14:06.818 "base_bdevs_list": [ 00:14:06.818 { 00:14:06.818 "name": null, 00:14:06.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.818 "is_configured": 
false, 00:14:06.818 "data_offset": 0, 00:14:06.818 "data_size": 65536 00:14:06.818 }, 00:14:06.818 { 00:14:06.818 "name": "BaseBdev2", 00:14:06.818 "uuid": "a6655a9e-fdde-5cf9-a4ae-6dc71822b4bd", 00:14:06.818 "is_configured": true, 00:14:06.818 "data_offset": 0, 00:14:06.818 "data_size": 65536 00:14:06.818 }, 00:14:06.818 { 00:14:06.818 "name": "BaseBdev3", 00:14:06.818 "uuid": "1958657e-e115-57c1-b9d4-e61346043576", 00:14:06.818 "is_configured": true, 00:14:06.818 "data_offset": 0, 00:14:06.818 "data_size": 65536 00:14:06.818 } 00:14:06.818 ] 00:14:06.818 }' 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.818 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.387 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:07.387 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.387 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.387 [2024-12-07 16:40:05.984643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.387 [2024-12-07 16:40:05.991230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:14:07.387 16:40:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.387 16:40:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:07.387 [2024-12-07 16:40:05.993691] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:08.369 16:40:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.369 16:40:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.369 16:40:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:08.369 16:40:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.369 16:40:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.369 16:40:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.369 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.369 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.369 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.369 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.369 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.369 "name": "raid_bdev1", 00:14:08.369 "uuid": "25f1a26b-b55d-428c-84e0-b9ef09b00252", 00:14:08.369 "strip_size_kb": 64, 00:14:08.369 "state": "online", 00:14:08.369 "raid_level": "raid5f", 00:14:08.369 "superblock": false, 00:14:08.369 "num_base_bdevs": 3, 00:14:08.369 "num_base_bdevs_discovered": 3, 00:14:08.369 "num_base_bdevs_operational": 3, 00:14:08.369 "process": { 00:14:08.369 "type": "rebuild", 00:14:08.369 "target": "spare", 00:14:08.369 "progress": { 00:14:08.369 "blocks": 20480, 00:14:08.369 "percent": 15 00:14:08.369 } 00:14:08.369 }, 00:14:08.369 "base_bdevs_list": [ 00:14:08.369 { 00:14:08.369 "name": "spare", 00:14:08.369 "uuid": "28602622-2330-5d32-985c-318c349ecc24", 00:14:08.369 "is_configured": true, 00:14:08.369 "data_offset": 0, 00:14:08.369 "data_size": 65536 00:14:08.369 }, 00:14:08.369 { 00:14:08.369 "name": "BaseBdev2", 00:14:08.369 "uuid": "a6655a9e-fdde-5cf9-a4ae-6dc71822b4bd", 00:14:08.369 "is_configured": true, 00:14:08.369 "data_offset": 0, 00:14:08.369 "data_size": 65536 00:14:08.369 }, 00:14:08.370 { 00:14:08.370 "name": "BaseBdev3", 00:14:08.370 "uuid": 
"1958657e-e115-57c1-b9d4-e61346043576", 00:14:08.370 "is_configured": true, 00:14:08.370 "data_offset": 0, 00:14:08.370 "data_size": 65536 00:14:08.370 } 00:14:08.370 ] 00:14:08.370 }' 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.370 [2024-12-07 16:40:07.145486] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.370 [2024-12-07 16:40:07.202137] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:08.370 [2024-12-07 16:40:07.202198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.370 [2024-12-07 16:40:07.202215] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.370 [2024-12-07 16:40:07.202227] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.370 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.629 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.629 "name": "raid_bdev1", 00:14:08.630 "uuid": "25f1a26b-b55d-428c-84e0-b9ef09b00252", 00:14:08.630 "strip_size_kb": 64, 00:14:08.630 "state": "online", 00:14:08.630 "raid_level": "raid5f", 00:14:08.630 "superblock": false, 00:14:08.630 "num_base_bdevs": 3, 00:14:08.630 "num_base_bdevs_discovered": 2, 00:14:08.630 "num_base_bdevs_operational": 2, 00:14:08.630 "base_bdevs_list": [ 00:14:08.630 { 00:14:08.630 "name": null, 00:14:08.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.630 "is_configured": false, 00:14:08.630 "data_offset": 0, 
00:14:08.630 "data_size": 65536 00:14:08.630 }, 00:14:08.630 { 00:14:08.630 "name": "BaseBdev2", 00:14:08.630 "uuid": "a6655a9e-fdde-5cf9-a4ae-6dc71822b4bd", 00:14:08.630 "is_configured": true, 00:14:08.630 "data_offset": 0, 00:14:08.630 "data_size": 65536 00:14:08.630 }, 00:14:08.630 { 00:14:08.630 "name": "BaseBdev3", 00:14:08.630 "uuid": "1958657e-e115-57c1-b9d4-e61346043576", 00:14:08.630 "is_configured": true, 00:14:08.630 "data_offset": 0, 00:14:08.630 "data_size": 65536 00:14:08.630 } 00:14:08.630 ] 00:14:08.630 }' 00:14:08.630 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.630 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.889 "name": "raid_bdev1", 00:14:08.889 "uuid": 
"25f1a26b-b55d-428c-84e0-b9ef09b00252", 00:14:08.889 "strip_size_kb": 64, 00:14:08.889 "state": "online", 00:14:08.889 "raid_level": "raid5f", 00:14:08.889 "superblock": false, 00:14:08.889 "num_base_bdevs": 3, 00:14:08.889 "num_base_bdevs_discovered": 2, 00:14:08.889 "num_base_bdevs_operational": 2, 00:14:08.889 "base_bdevs_list": [ 00:14:08.889 { 00:14:08.889 "name": null, 00:14:08.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.889 "is_configured": false, 00:14:08.889 "data_offset": 0, 00:14:08.889 "data_size": 65536 00:14:08.889 }, 00:14:08.889 { 00:14:08.889 "name": "BaseBdev2", 00:14:08.889 "uuid": "a6655a9e-fdde-5cf9-a4ae-6dc71822b4bd", 00:14:08.889 "is_configured": true, 00:14:08.889 "data_offset": 0, 00:14:08.889 "data_size": 65536 00:14:08.889 }, 00:14:08.889 { 00:14:08.889 "name": "BaseBdev3", 00:14:08.889 "uuid": "1958657e-e115-57c1-b9d4-e61346043576", 00:14:08.889 "is_configured": true, 00:14:08.889 "data_offset": 0, 00:14:08.889 "data_size": 65536 00:14:08.889 } 00:14:08.889 ] 00:14:08.889 }' 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.889 [2024-12-07 16:40:07.758636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.889 [2024-12-07 16:40:07.764955] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.889 16:40:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:08.889 [2024-12-07 16:40:07.767408] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.269 "name": "raid_bdev1", 00:14:10.269 "uuid": "25f1a26b-b55d-428c-84e0-b9ef09b00252", 00:14:10.269 "strip_size_kb": 64, 00:14:10.269 "state": "online", 00:14:10.269 "raid_level": "raid5f", 00:14:10.269 "superblock": false, 00:14:10.269 "num_base_bdevs": 3, 00:14:10.269 "num_base_bdevs_discovered": 3, 00:14:10.269 "num_base_bdevs_operational": 3, 00:14:10.269 "process": { 
00:14:10.269 "type": "rebuild", 00:14:10.269 "target": "spare", 00:14:10.269 "progress": { 00:14:10.269 "blocks": 20480, 00:14:10.269 "percent": 15 00:14:10.269 } 00:14:10.269 }, 00:14:10.269 "base_bdevs_list": [ 00:14:10.269 { 00:14:10.269 "name": "spare", 00:14:10.269 "uuid": "28602622-2330-5d32-985c-318c349ecc24", 00:14:10.269 "is_configured": true, 00:14:10.269 "data_offset": 0, 00:14:10.269 "data_size": 65536 00:14:10.269 }, 00:14:10.269 { 00:14:10.269 "name": "BaseBdev2", 00:14:10.269 "uuid": "a6655a9e-fdde-5cf9-a4ae-6dc71822b4bd", 00:14:10.269 "is_configured": true, 00:14:10.269 "data_offset": 0, 00:14:10.269 "data_size": 65536 00:14:10.269 }, 00:14:10.269 { 00:14:10.269 "name": "BaseBdev3", 00:14:10.269 "uuid": "1958657e-e115-57c1-b9d4-e61346043576", 00:14:10.269 "is_configured": true, 00:14:10.269 "data_offset": 0, 00:14:10.269 "data_size": 65536 00:14:10.269 } 00:14:10.269 ] 00:14:10.269 }' 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=460 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.269 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.269 "name": "raid_bdev1", 00:14:10.269 "uuid": "25f1a26b-b55d-428c-84e0-b9ef09b00252", 00:14:10.269 "strip_size_kb": 64, 00:14:10.269 "state": "online", 00:14:10.269 "raid_level": "raid5f", 00:14:10.269 "superblock": false, 00:14:10.269 "num_base_bdevs": 3, 00:14:10.269 "num_base_bdevs_discovered": 3, 00:14:10.269 "num_base_bdevs_operational": 3, 00:14:10.269 "process": { 00:14:10.269 "type": "rebuild", 00:14:10.269 "target": "spare", 00:14:10.269 "progress": { 00:14:10.269 "blocks": 22528, 00:14:10.269 "percent": 17 00:14:10.269 } 00:14:10.269 }, 00:14:10.269 "base_bdevs_list": [ 00:14:10.269 { 00:14:10.269 "name": "spare", 00:14:10.269 "uuid": "28602622-2330-5d32-985c-318c349ecc24", 00:14:10.269 "is_configured": true, 00:14:10.269 "data_offset": 0, 00:14:10.269 "data_size": 65536 00:14:10.269 }, 00:14:10.269 { 00:14:10.269 "name": "BaseBdev2", 
00:14:10.269 "uuid": "a6655a9e-fdde-5cf9-a4ae-6dc71822b4bd", 00:14:10.269 "is_configured": true, 00:14:10.269 "data_offset": 0, 00:14:10.269 "data_size": 65536 00:14:10.269 }, 00:14:10.269 { 00:14:10.269 "name": "BaseBdev3", 00:14:10.269 "uuid": "1958657e-e115-57c1-b9d4-e61346043576", 00:14:10.269 "is_configured": true, 00:14:10.269 "data_offset": 0, 00:14:10.269 "data_size": 65536 00:14:10.269 } 00:14:10.269 ] 00:14:10.269 }' 00:14:10.270 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.270 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.270 16:40:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.270 16:40:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.270 16:40:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:11.209 16:40:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.209 16:40:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.209 16:40:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.209 16:40:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.209 16:40:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.209 16:40:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.209 16:40:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.209 16:40:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.209 16:40:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.209 16:40:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.209 16:40:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.209 16:40:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.209 "name": "raid_bdev1", 00:14:11.209 "uuid": "25f1a26b-b55d-428c-84e0-b9ef09b00252", 00:14:11.209 "strip_size_kb": 64, 00:14:11.209 "state": "online", 00:14:11.209 "raid_level": "raid5f", 00:14:11.209 "superblock": false, 00:14:11.209 "num_base_bdevs": 3, 00:14:11.209 "num_base_bdevs_discovered": 3, 00:14:11.209 "num_base_bdevs_operational": 3, 00:14:11.209 "process": { 00:14:11.209 "type": "rebuild", 00:14:11.209 "target": "spare", 00:14:11.209 "progress": { 00:14:11.209 "blocks": 45056, 00:14:11.209 "percent": 34 00:14:11.209 } 00:14:11.209 }, 00:14:11.209 "base_bdevs_list": [ 00:14:11.209 { 00:14:11.209 "name": "spare", 00:14:11.209 "uuid": "28602622-2330-5d32-985c-318c349ecc24", 00:14:11.209 "is_configured": true, 00:14:11.209 "data_offset": 0, 00:14:11.209 "data_size": 65536 00:14:11.209 }, 00:14:11.209 { 00:14:11.209 "name": "BaseBdev2", 00:14:11.209 "uuid": "a6655a9e-fdde-5cf9-a4ae-6dc71822b4bd", 00:14:11.209 "is_configured": true, 00:14:11.209 "data_offset": 0, 00:14:11.209 "data_size": 65536 00:14:11.209 }, 00:14:11.209 { 00:14:11.209 "name": "BaseBdev3", 00:14:11.209 "uuid": "1958657e-e115-57c1-b9d4-e61346043576", 00:14:11.209 "is_configured": true, 00:14:11.209 "data_offset": 0, 00:14:11.209 "data_size": 65536 00:14:11.209 } 00:14:11.209 ] 00:14:11.209 }' 00:14:11.470 16:40:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.470 16:40:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.470 16:40:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.470 16:40:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.470 16:40:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.459 "name": "raid_bdev1", 00:14:12.459 "uuid": "25f1a26b-b55d-428c-84e0-b9ef09b00252", 00:14:12.459 "strip_size_kb": 64, 00:14:12.459 "state": "online", 00:14:12.459 "raid_level": "raid5f", 00:14:12.459 "superblock": false, 00:14:12.459 "num_base_bdevs": 3, 00:14:12.459 "num_base_bdevs_discovered": 3, 00:14:12.459 "num_base_bdevs_operational": 3, 00:14:12.459 "process": { 00:14:12.459 "type": "rebuild", 00:14:12.459 "target": "spare", 00:14:12.459 "progress": { 00:14:12.459 "blocks": 69632, 
00:14:12.459 "percent": 53 00:14:12.459 } 00:14:12.459 }, 00:14:12.459 "base_bdevs_list": [ 00:14:12.459 { 00:14:12.459 "name": "spare", 00:14:12.459 "uuid": "28602622-2330-5d32-985c-318c349ecc24", 00:14:12.459 "is_configured": true, 00:14:12.459 "data_offset": 0, 00:14:12.459 "data_size": 65536 00:14:12.459 }, 00:14:12.459 { 00:14:12.459 "name": "BaseBdev2", 00:14:12.459 "uuid": "a6655a9e-fdde-5cf9-a4ae-6dc71822b4bd", 00:14:12.459 "is_configured": true, 00:14:12.459 "data_offset": 0, 00:14:12.459 "data_size": 65536 00:14:12.459 }, 00:14:12.459 { 00:14:12.459 "name": "BaseBdev3", 00:14:12.459 "uuid": "1958657e-e115-57c1-b9d4-e61346043576", 00:14:12.459 "is_configured": true, 00:14:12.459 "data_offset": 0, 00:14:12.459 "data_size": 65536 00:14:12.459 } 00:14:12.459 ] 00:14:12.459 }' 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.459 16:40:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:13.452 16:40:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:13.452 16:40:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.452 16:40:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.452 16:40:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.452 16:40:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.452 16:40:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:14:13.452 16:40:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.452 16:40:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.452 16:40:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.452 16:40:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.711 16:40:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.711 16:40:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.711 "name": "raid_bdev1", 00:14:13.711 "uuid": "25f1a26b-b55d-428c-84e0-b9ef09b00252", 00:14:13.711 "strip_size_kb": 64, 00:14:13.711 "state": "online", 00:14:13.711 "raid_level": "raid5f", 00:14:13.711 "superblock": false, 00:14:13.711 "num_base_bdevs": 3, 00:14:13.711 "num_base_bdevs_discovered": 3, 00:14:13.711 "num_base_bdevs_operational": 3, 00:14:13.711 "process": { 00:14:13.711 "type": "rebuild", 00:14:13.711 "target": "spare", 00:14:13.711 "progress": { 00:14:13.711 "blocks": 92160, 00:14:13.711 "percent": 70 00:14:13.711 } 00:14:13.711 }, 00:14:13.711 "base_bdevs_list": [ 00:14:13.711 { 00:14:13.711 "name": "spare", 00:14:13.711 "uuid": "28602622-2330-5d32-985c-318c349ecc24", 00:14:13.711 "is_configured": true, 00:14:13.711 "data_offset": 0, 00:14:13.711 "data_size": 65536 00:14:13.711 }, 00:14:13.711 { 00:14:13.711 "name": "BaseBdev2", 00:14:13.711 "uuid": "a6655a9e-fdde-5cf9-a4ae-6dc71822b4bd", 00:14:13.711 "is_configured": true, 00:14:13.711 "data_offset": 0, 00:14:13.711 "data_size": 65536 00:14:13.711 }, 00:14:13.711 { 00:14:13.711 "name": "BaseBdev3", 00:14:13.711 "uuid": "1958657e-e115-57c1-b9d4-e61346043576", 00:14:13.711 "is_configured": true, 00:14:13.711 "data_offset": 0, 00:14:13.711 "data_size": 65536 00:14:13.711 } 00:14:13.711 ] 00:14:13.711 }' 00:14:13.711 16:40:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.711 16:40:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.711 16:40:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.711 16:40:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.711 16:40:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:14.651 16:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:14.651 16:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.651 16:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.651 16:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.651 16:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.651 16:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.651 16:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.651 16:40:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.651 16:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.651 16:40:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.651 16:40:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.651 16:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.651 "name": "raid_bdev1", 00:14:14.651 "uuid": "25f1a26b-b55d-428c-84e0-b9ef09b00252", 00:14:14.651 "strip_size_kb": 64, 00:14:14.651 "state": "online", 00:14:14.651 "raid_level": "raid5f", 
00:14:14.651 "superblock": false, 00:14:14.651 "num_base_bdevs": 3, 00:14:14.651 "num_base_bdevs_discovered": 3, 00:14:14.651 "num_base_bdevs_operational": 3, 00:14:14.651 "process": { 00:14:14.651 "type": "rebuild", 00:14:14.651 "target": "spare", 00:14:14.651 "progress": { 00:14:14.651 "blocks": 114688, 00:14:14.651 "percent": 87 00:14:14.651 } 00:14:14.651 }, 00:14:14.651 "base_bdevs_list": [ 00:14:14.651 { 00:14:14.651 "name": "spare", 00:14:14.651 "uuid": "28602622-2330-5d32-985c-318c349ecc24", 00:14:14.651 "is_configured": true, 00:14:14.651 "data_offset": 0, 00:14:14.651 "data_size": 65536 00:14:14.651 }, 00:14:14.651 { 00:14:14.651 "name": "BaseBdev2", 00:14:14.651 "uuid": "a6655a9e-fdde-5cf9-a4ae-6dc71822b4bd", 00:14:14.651 "is_configured": true, 00:14:14.651 "data_offset": 0, 00:14:14.651 "data_size": 65536 00:14:14.651 }, 00:14:14.651 { 00:14:14.651 "name": "BaseBdev3", 00:14:14.651 "uuid": "1958657e-e115-57c1-b9d4-e61346043576", 00:14:14.651 "is_configured": true, 00:14:14.651 "data_offset": 0, 00:14:14.651 "data_size": 65536 00:14:14.651 } 00:14:14.651 ] 00:14:14.651 }' 00:14:14.651 16:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.910 16:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.910 16:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.910 16:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.910 16:40:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:15.480 [2024-12-07 16:40:14.208988] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:15.480 [2024-12-07 16:40:14.209060] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:15.480 [2024-12-07 16:40:14.209110] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:14:15.740 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:15.740 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.740 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.740 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.740 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.740 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.740 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.740 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.740 16:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.740 16:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.000 16:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.001 "name": "raid_bdev1", 00:14:16.001 "uuid": "25f1a26b-b55d-428c-84e0-b9ef09b00252", 00:14:16.001 "strip_size_kb": 64, 00:14:16.001 "state": "online", 00:14:16.001 "raid_level": "raid5f", 00:14:16.001 "superblock": false, 00:14:16.001 "num_base_bdevs": 3, 00:14:16.001 "num_base_bdevs_discovered": 3, 00:14:16.001 "num_base_bdevs_operational": 3, 00:14:16.001 "base_bdevs_list": [ 00:14:16.001 { 00:14:16.001 "name": "spare", 00:14:16.001 "uuid": "28602622-2330-5d32-985c-318c349ecc24", 00:14:16.001 "is_configured": true, 00:14:16.001 "data_offset": 0, 00:14:16.001 "data_size": 65536 00:14:16.001 }, 00:14:16.001 { 00:14:16.001 "name": 
"BaseBdev2", 00:14:16.001 "uuid": "a6655a9e-fdde-5cf9-a4ae-6dc71822b4bd", 00:14:16.001 "is_configured": true, 00:14:16.001 "data_offset": 0, 00:14:16.001 "data_size": 65536 00:14:16.001 }, 00:14:16.001 { 00:14:16.001 "name": "BaseBdev3", 00:14:16.001 "uuid": "1958657e-e115-57c1-b9d4-e61346043576", 00:14:16.001 "is_configured": true, 00:14:16.001 "data_offset": 0, 00:14:16.001 "data_size": 65536 00:14:16.001 } 00:14:16.001 ] 00:14:16.001 }' 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.001 16:40:14 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.001 "name": "raid_bdev1", 00:14:16.001 "uuid": "25f1a26b-b55d-428c-84e0-b9ef09b00252", 00:14:16.001 "strip_size_kb": 64, 00:14:16.001 "state": "online", 00:14:16.001 "raid_level": "raid5f", 00:14:16.001 "superblock": false, 00:14:16.001 "num_base_bdevs": 3, 00:14:16.001 "num_base_bdevs_discovered": 3, 00:14:16.001 "num_base_bdevs_operational": 3, 00:14:16.001 "base_bdevs_list": [ 00:14:16.001 { 00:14:16.001 "name": "spare", 00:14:16.001 "uuid": "28602622-2330-5d32-985c-318c349ecc24", 00:14:16.001 "is_configured": true, 00:14:16.001 "data_offset": 0, 00:14:16.001 "data_size": 65536 00:14:16.001 }, 00:14:16.001 { 00:14:16.001 "name": "BaseBdev2", 00:14:16.001 "uuid": "a6655a9e-fdde-5cf9-a4ae-6dc71822b4bd", 00:14:16.001 "is_configured": true, 00:14:16.001 "data_offset": 0, 00:14:16.001 "data_size": 65536 00:14:16.001 }, 00:14:16.001 { 00:14:16.001 "name": "BaseBdev3", 00:14:16.001 "uuid": "1958657e-e115-57c1-b9d4-e61346043576", 00:14:16.001 "is_configured": true, 00:14:16.001 "data_offset": 0, 00:14:16.001 "data_size": 65536 00:14:16.001 } 00:14:16.001 ] 00:14:16.001 }' 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.001 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.261 16:40:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.261 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.261 "name": "raid_bdev1", 00:14:16.261 "uuid": "25f1a26b-b55d-428c-84e0-b9ef09b00252", 00:14:16.261 "strip_size_kb": 64, 00:14:16.261 "state": "online", 00:14:16.261 "raid_level": "raid5f", 00:14:16.261 "superblock": false, 00:14:16.261 "num_base_bdevs": 3, 00:14:16.261 "num_base_bdevs_discovered": 3, 00:14:16.261 "num_base_bdevs_operational": 3, 00:14:16.261 "base_bdevs_list": [ 00:14:16.261 { 00:14:16.261 "name": "spare", 00:14:16.261 "uuid": "28602622-2330-5d32-985c-318c349ecc24", 00:14:16.261 "is_configured": 
true, 00:14:16.261 "data_offset": 0, 00:14:16.261 "data_size": 65536 00:14:16.261 }, 00:14:16.261 { 00:14:16.261 "name": "BaseBdev2", 00:14:16.261 "uuid": "a6655a9e-fdde-5cf9-a4ae-6dc71822b4bd", 00:14:16.261 "is_configured": true, 00:14:16.261 "data_offset": 0, 00:14:16.261 "data_size": 65536 00:14:16.261 }, 00:14:16.261 { 00:14:16.262 "name": "BaseBdev3", 00:14:16.262 "uuid": "1958657e-e115-57c1-b9d4-e61346043576", 00:14:16.262 "is_configured": true, 00:14:16.262 "data_offset": 0, 00:14:16.262 "data_size": 65536 00:14:16.262 } 00:14:16.262 ] 00:14:16.262 }' 00:14:16.262 16:40:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.262 16:40:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.520 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:16.520 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.520 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.520 [2024-12-07 16:40:15.351846] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:16.520 [2024-12-07 16:40:15.351933] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.520 [2024-12-07 16:40:15.352060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.520 [2024-12-07 16:40:15.352184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.520 [2024-12-07 16:40:15.352219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:16.520 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.520 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.520 16:40:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:16.520 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.520 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.520 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.520 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:16.520 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:16.521 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:16.521 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:16.521 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.521 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:16.521 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:16.521 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:16.521 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:16.521 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:16.521 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:16.521 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:16.521 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:16.779 /dev/nbd0 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:16.780 16:40:15 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:16.780 1+0 records in 00:14:16.780 1+0 records out 00:14:16.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390139 s, 10.5 MB/s 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:16.780 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:17.039 /dev/nbd1 00:14:17.039 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:17.039 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:17.039 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:17.039 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:17.039 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:17.039 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:17.039 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:17.039 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:17.039 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:17.039 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:17.040 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:17.040 1+0 records in 00:14:17.040 1+0 records out 00:14:17.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297495 s, 13.8 MB/s 00:14:17.040 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:17.040 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:17.040 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:17.040 
16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:17.040 16:40:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:17.040 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:17.040 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:17.040 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:17.299 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:17.300 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.300 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:17.300 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:17.300 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:17.300 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.300 16:40:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:17.300 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:17.300 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:17.300 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:17.300 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.300 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.300 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:17.300 16:40:16 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:14:17.300 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.300 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.300 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92415 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 92415 ']' 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 92415 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92415 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92415' 00:14:17.560 killing process with pid 92415 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 92415 00:14:17.560 Received shutdown signal, test time was about 60.000000 seconds 00:14:17.560 00:14:17.560 Latency(us) 00:14:17.560 [2024-12-07T16:40:16.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.560 [2024-12-07T16:40:16.459Z] =================================================================================================================== 00:14:17.560 [2024-12-07T16:40:16.459Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:17.560 [2024-12-07 16:40:16.408481] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.560 16:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 92415 00:14:17.819 [2024-12-07 16:40:16.483066] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:18.079 00:14:18.079 real 0m13.655s 00:14:18.079 user 0m16.720s 00:14:18.079 sys 0m2.150s 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.079 ************************************ 00:14:18.079 END TEST raid5f_rebuild_test 00:14:18.079 ************************************ 00:14:18.079 16:40:16 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:18.079 16:40:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 
00:14:18.079 16:40:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:18.079 16:40:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.079 ************************************ 00:14:18.079 START TEST raid5f_rebuild_test_sb 00:14:18.079 ************************************ 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92834 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92834 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@831 -- # '[' -z 92834 ']' 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.079 16:40:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.339 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:18.339 Zero copy mechanism will not be used. 00:14:18.339 [2024-12-07 16:40:17.014649] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:18.339 [2024-12-07 16:40:17.014803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92834 ] 00:14:18.339 [2024-12-07 16:40:17.181155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.599 [2024-12-07 16:40:17.250243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.599 [2024-12-07 16:40:17.325817] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.599 [2024-12-07 16:40:17.325851] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.170 BaseBdev1_malloc 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.170 [2024-12-07 16:40:17.859968] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:19.170 [2024-12-07 16:40:17.860035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.170 [2024-12-07 16:40:17.860065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:19.170 [2024-12-07 16:40:17.860088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.170 [2024-12-07 16:40:17.862521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.170 [2024-12-07 16:40:17.862560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:19.170 BaseBdev1 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.170 BaseBdev2_malloc 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.170 [2024-12-07 16:40:17.909799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:19.170 [2024-12-07 16:40:17.909898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:19.170 [2024-12-07 16:40:17.909945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:19.170 [2024-12-07 16:40:17.909965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.170 [2024-12-07 16:40:17.914390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.170 [2024-12-07 16:40:17.914435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:19.170 BaseBdev2 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.170 BaseBdev3_malloc 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.170 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.171 [2024-12-07 16:40:17.945113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:19.171 [2024-12-07 16:40:17.945194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.171 [2024-12-07 16:40:17.945241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:19.171 [2024-12-07 
16:40:17.945250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.171 [2024-12-07 16:40:17.947563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.171 [2024-12-07 16:40:17.947593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:19.171 BaseBdev3 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.171 spare_malloc 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.171 spare_delay 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.171 [2024-12-07 16:40:17.991376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:19.171 [2024-12-07 16:40:17.991418] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.171 [2024-12-07 16:40:17.991444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:19.171 [2024-12-07 16:40:17.991452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.171 [2024-12-07 16:40:17.993794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.171 [2024-12-07 16:40:17.993827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:19.171 spare 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.171 16:40:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.171 [2024-12-07 16:40:18.003437] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.171 [2024-12-07 16:40:18.005487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.171 [2024-12-07 16:40:18.005552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:19.171 [2024-12-07 16:40:18.005702] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:19.171 [2024-12-07 16:40:18.005715] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:19.171 [2024-12-07 16:40:18.005968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:19.171 [2024-12-07 16:40:18.006397] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:19.171 [2024-12-07 16:40:18.006409] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:19.171 [2024-12-07 16:40:18.006524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.171 "name": "raid_bdev1", 00:14:19.171 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:19.171 "strip_size_kb": 64, 00:14:19.171 "state": "online", 00:14:19.171 "raid_level": "raid5f", 00:14:19.171 "superblock": true, 00:14:19.171 "num_base_bdevs": 3, 00:14:19.171 "num_base_bdevs_discovered": 3, 00:14:19.171 "num_base_bdevs_operational": 3, 00:14:19.171 "base_bdevs_list": [ 00:14:19.171 { 00:14:19.171 "name": "BaseBdev1", 00:14:19.171 "uuid": "6e2bd6a7-5a50-58f9-9fd1-b6a9d0bc5954", 00:14:19.171 "is_configured": true, 00:14:19.171 "data_offset": 2048, 00:14:19.171 "data_size": 63488 00:14:19.171 }, 00:14:19.171 { 00:14:19.171 "name": "BaseBdev2", 00:14:19.171 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:19.171 "is_configured": true, 00:14:19.171 "data_offset": 2048, 00:14:19.171 "data_size": 63488 00:14:19.171 }, 00:14:19.171 { 00:14:19.171 "name": "BaseBdev3", 00:14:19.171 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:19.171 "is_configured": true, 00:14:19.171 "data_offset": 2048, 00:14:19.171 "data_size": 63488 00:14:19.171 } 00:14:19.171 ] 00:14:19.171 }' 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.171 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.741 [2024-12-07 16:40:18.408271] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:19.741 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:20.002 [2024-12-07 16:40:18.683763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:20.002 /dev/nbd0 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.002 1+0 records in 00:14:20.002 1+0 records out 00:14:20.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231439 s, 17.7 MB/s 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:20.002 16:40:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:20.263 496+0 records in 00:14:20.263 496+0 records out 00:14:20.263 65011712 bytes (65 MB, 62 MiB) copied, 0.293386 s, 222 MB/s 00:14:20.263 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:20.263 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.263 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:20.263 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:20.263 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:20.263 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:14:20.263 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:20.523 [2024-12-07 16:40:19.269105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.523 [2024-12-07 16:40:19.283312] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.523 "name": "raid_bdev1", 00:14:20.523 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:20.523 "strip_size_kb": 64, 00:14:20.523 "state": "online", 00:14:20.523 "raid_level": "raid5f", 00:14:20.523 "superblock": true, 00:14:20.523 "num_base_bdevs": 3, 00:14:20.523 "num_base_bdevs_discovered": 2, 00:14:20.523 "num_base_bdevs_operational": 2, 00:14:20.523 "base_bdevs_list": [ 00:14:20.523 { 00:14:20.523 "name": null, 00:14:20.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.523 "is_configured": 
false, 00:14:20.523 "data_offset": 0, 00:14:20.523 "data_size": 63488 00:14:20.523 }, 00:14:20.523 { 00:14:20.523 "name": "BaseBdev2", 00:14:20.523 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:20.523 "is_configured": true, 00:14:20.523 "data_offset": 2048, 00:14:20.523 "data_size": 63488 00:14:20.523 }, 00:14:20.523 { 00:14:20.523 "name": "BaseBdev3", 00:14:20.523 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:20.523 "is_configured": true, 00:14:20.523 "data_offset": 2048, 00:14:20.523 "data_size": 63488 00:14:20.523 } 00:14:20.523 ] 00:14:20.523 }' 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.523 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.093 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:21.093 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.093 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.093 [2024-12-07 16:40:19.750503] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.093 [2024-12-07 16:40:19.757157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:14:21.093 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.093 16:40:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:21.093 [2024-12-07 16:40:19.759659] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:22.033 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.033 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.033 16:40:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.033 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.033 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.033 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.033 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.033 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.033 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.033 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.033 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.033 "name": "raid_bdev1", 00:14:22.033 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:22.033 "strip_size_kb": 64, 00:14:22.033 "state": "online", 00:14:22.033 "raid_level": "raid5f", 00:14:22.033 "superblock": true, 00:14:22.033 "num_base_bdevs": 3, 00:14:22.033 "num_base_bdevs_discovered": 3, 00:14:22.033 "num_base_bdevs_operational": 3, 00:14:22.033 "process": { 00:14:22.033 "type": "rebuild", 00:14:22.033 "target": "spare", 00:14:22.033 "progress": { 00:14:22.033 "blocks": 20480, 00:14:22.033 "percent": 16 00:14:22.033 } 00:14:22.033 }, 00:14:22.034 "base_bdevs_list": [ 00:14:22.034 { 00:14:22.034 "name": "spare", 00:14:22.034 "uuid": "7e75473c-0535-5702-af24-1b480b0ba7c3", 00:14:22.034 "is_configured": true, 00:14:22.034 "data_offset": 2048, 00:14:22.034 "data_size": 63488 00:14:22.034 }, 00:14:22.034 { 00:14:22.034 "name": "BaseBdev2", 00:14:22.034 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:22.034 "is_configured": true, 00:14:22.034 "data_offset": 2048, 00:14:22.034 "data_size": 63488 
00:14:22.034 }, 00:14:22.034 { 00:14:22.034 "name": "BaseBdev3", 00:14:22.034 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:22.034 "is_configured": true, 00:14:22.034 "data_offset": 2048, 00:14:22.034 "data_size": 63488 00:14:22.034 } 00:14:22.034 ] 00:14:22.034 }' 00:14:22.034 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.034 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.034 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.034 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.034 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:22.034 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.034 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.034 [2024-12-07 16:40:20.911208] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:22.294 [2024-12-07 16:40:20.967930] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:22.294 [2024-12-07 16:40:20.967991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.294 [2024-12-07 16:40:20.968008] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:22.294 [2024-12-07 16:40:20.968030] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.294 16:40:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.294 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.294 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.294 "name": "raid_bdev1", 00:14:22.294 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:22.294 "strip_size_kb": 64, 00:14:22.294 "state": "online", 00:14:22.294 "raid_level": "raid5f", 00:14:22.294 "superblock": true, 00:14:22.294 "num_base_bdevs": 3, 00:14:22.294 "num_base_bdevs_discovered": 2, 00:14:22.294 "num_base_bdevs_operational": 2, 00:14:22.294 "base_bdevs_list": [ 00:14:22.294 
{ 00:14:22.294 "name": null, 00:14:22.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.294 "is_configured": false, 00:14:22.294 "data_offset": 0, 00:14:22.294 "data_size": 63488 00:14:22.294 }, 00:14:22.294 { 00:14:22.294 "name": "BaseBdev2", 00:14:22.294 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:22.294 "is_configured": true, 00:14:22.294 "data_offset": 2048, 00:14:22.294 "data_size": 63488 00:14:22.294 }, 00:14:22.294 { 00:14:22.294 "name": "BaseBdev3", 00:14:22.294 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:22.294 "is_configured": true, 00:14:22.294 "data_offset": 2048, 00:14:22.294 "data_size": 63488 00:14:22.294 } 00:14:22.294 ] 00:14:22.294 }' 00:14:22.294 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.294 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.555 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:22.555 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.555 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:22.555 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:22.555 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.555 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.555 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.555 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.555 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.816 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:22.816 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.816 "name": "raid_bdev1", 00:14:22.816 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:22.816 "strip_size_kb": 64, 00:14:22.816 "state": "online", 00:14:22.816 "raid_level": "raid5f", 00:14:22.816 "superblock": true, 00:14:22.816 "num_base_bdevs": 3, 00:14:22.816 "num_base_bdevs_discovered": 2, 00:14:22.816 "num_base_bdevs_operational": 2, 00:14:22.816 "base_bdevs_list": [ 00:14:22.816 { 00:14:22.816 "name": null, 00:14:22.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.816 "is_configured": false, 00:14:22.816 "data_offset": 0, 00:14:22.816 "data_size": 63488 00:14:22.816 }, 00:14:22.816 { 00:14:22.816 "name": "BaseBdev2", 00:14:22.816 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:22.816 "is_configured": true, 00:14:22.816 "data_offset": 2048, 00:14:22.816 "data_size": 63488 00:14:22.816 }, 00:14:22.816 { 00:14:22.816 "name": "BaseBdev3", 00:14:22.816 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:22.816 "is_configured": true, 00:14:22.816 "data_offset": 2048, 00:14:22.816 "data_size": 63488 00:14:22.816 } 00:14:22.816 ] 00:14:22.816 }' 00:14:22.816 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.816 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.816 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.816 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.816 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:22.816 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.816 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:22.816 [2024-12-07 16:40:21.551827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:22.816 [2024-12-07 16:40:21.557736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:14:22.816 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.816 16:40:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:22.816 [2024-12-07 16:40:21.560227] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:23.757 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.757 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.757 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.757 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.757 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.757 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.757 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.757 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.757 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.757 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.757 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.757 "name": "raid_bdev1", 00:14:23.757 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:23.757 "strip_size_kb": 64, 00:14:23.757 "state": "online", 
00:14:23.757 "raid_level": "raid5f", 00:14:23.757 "superblock": true, 00:14:23.757 "num_base_bdevs": 3, 00:14:23.757 "num_base_bdevs_discovered": 3, 00:14:23.757 "num_base_bdevs_operational": 3, 00:14:23.757 "process": { 00:14:23.757 "type": "rebuild", 00:14:23.757 "target": "spare", 00:14:23.757 "progress": { 00:14:23.757 "blocks": 20480, 00:14:23.757 "percent": 16 00:14:23.757 } 00:14:23.757 }, 00:14:23.757 "base_bdevs_list": [ 00:14:23.757 { 00:14:23.757 "name": "spare", 00:14:23.757 "uuid": "7e75473c-0535-5702-af24-1b480b0ba7c3", 00:14:23.757 "is_configured": true, 00:14:23.757 "data_offset": 2048, 00:14:23.757 "data_size": 63488 00:14:23.757 }, 00:14:23.757 { 00:14:23.757 "name": "BaseBdev2", 00:14:23.757 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:23.757 "is_configured": true, 00:14:23.757 "data_offset": 2048, 00:14:23.757 "data_size": 63488 00:14:23.757 }, 00:14:23.757 { 00:14:23.757 "name": "BaseBdev3", 00:14:23.757 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:23.757 "is_configured": true, 00:14:23.757 "data_offset": 2048, 00:14:23.757 "data_size": 63488 00:14:23.757 } 00:14:23.757 ] 00:14:23.757 }' 00:14:23.757 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:24.017 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=474 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.017 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.017 "name": "raid_bdev1", 00:14:24.017 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:24.017 "strip_size_kb": 64, 00:14:24.017 "state": "online", 00:14:24.017 "raid_level": "raid5f", 00:14:24.017 "superblock": true, 00:14:24.017 "num_base_bdevs": 3, 00:14:24.017 "num_base_bdevs_discovered": 3, 00:14:24.017 "num_base_bdevs_operational": 3, 00:14:24.017 "process": { 00:14:24.018 "type": 
"rebuild", 00:14:24.018 "target": "spare", 00:14:24.018 "progress": { 00:14:24.018 "blocks": 22528, 00:14:24.018 "percent": 17 00:14:24.018 } 00:14:24.018 }, 00:14:24.018 "base_bdevs_list": [ 00:14:24.018 { 00:14:24.018 "name": "spare", 00:14:24.018 "uuid": "7e75473c-0535-5702-af24-1b480b0ba7c3", 00:14:24.018 "is_configured": true, 00:14:24.018 "data_offset": 2048, 00:14:24.018 "data_size": 63488 00:14:24.018 }, 00:14:24.018 { 00:14:24.018 "name": "BaseBdev2", 00:14:24.018 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:24.018 "is_configured": true, 00:14:24.018 "data_offset": 2048, 00:14:24.018 "data_size": 63488 00:14:24.018 }, 00:14:24.018 { 00:14:24.018 "name": "BaseBdev3", 00:14:24.018 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:24.018 "is_configured": true, 00:14:24.018 "data_offset": 2048, 00:14:24.018 "data_size": 63488 00:14:24.018 } 00:14:24.018 ] 00:14:24.018 }' 00:14:24.018 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.018 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.018 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.018 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.018 16:40:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.399 "name": "raid_bdev1", 00:14:25.399 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:25.399 "strip_size_kb": 64, 00:14:25.399 "state": "online", 00:14:25.399 "raid_level": "raid5f", 00:14:25.399 "superblock": true, 00:14:25.399 "num_base_bdevs": 3, 00:14:25.399 "num_base_bdevs_discovered": 3, 00:14:25.399 "num_base_bdevs_operational": 3, 00:14:25.399 "process": { 00:14:25.399 "type": "rebuild", 00:14:25.399 "target": "spare", 00:14:25.399 "progress": { 00:14:25.399 "blocks": 47104, 00:14:25.399 "percent": 37 00:14:25.399 } 00:14:25.399 }, 00:14:25.399 "base_bdevs_list": [ 00:14:25.399 { 00:14:25.399 "name": "spare", 00:14:25.399 "uuid": "7e75473c-0535-5702-af24-1b480b0ba7c3", 00:14:25.399 "is_configured": true, 00:14:25.399 "data_offset": 2048, 00:14:25.399 "data_size": 63488 00:14:25.399 }, 00:14:25.399 { 00:14:25.399 "name": "BaseBdev2", 00:14:25.399 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:25.399 "is_configured": true, 00:14:25.399 "data_offset": 2048, 00:14:25.399 "data_size": 63488 00:14:25.399 }, 00:14:25.399 { 00:14:25.399 "name": "BaseBdev3", 00:14:25.399 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:25.399 
"is_configured": true, 00:14:25.399 "data_offset": 2048, 00:14:25.399 "data_size": 63488 00:14:25.399 } 00:14:25.399 ] 00:14:25.399 }' 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.399 16:40:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.399 16:40:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.399 16:40:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.347 "name": "raid_bdev1", 00:14:26.347 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:26.347 "strip_size_kb": 64, 00:14:26.347 "state": "online", 00:14:26.347 "raid_level": "raid5f", 00:14:26.347 "superblock": true, 00:14:26.347 "num_base_bdevs": 3, 00:14:26.347 "num_base_bdevs_discovered": 3, 00:14:26.347 "num_base_bdevs_operational": 3, 00:14:26.347 "process": { 00:14:26.347 "type": "rebuild", 00:14:26.347 "target": "spare", 00:14:26.347 "progress": { 00:14:26.347 "blocks": 69632, 00:14:26.347 "percent": 54 00:14:26.347 } 00:14:26.347 }, 00:14:26.347 "base_bdevs_list": [ 00:14:26.347 { 00:14:26.347 "name": "spare", 00:14:26.347 "uuid": "7e75473c-0535-5702-af24-1b480b0ba7c3", 00:14:26.347 "is_configured": true, 00:14:26.347 "data_offset": 2048, 00:14:26.347 "data_size": 63488 00:14:26.347 }, 00:14:26.347 { 00:14:26.347 "name": "BaseBdev2", 00:14:26.347 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:26.347 "is_configured": true, 00:14:26.347 "data_offset": 2048, 00:14:26.347 "data_size": 63488 00:14:26.347 }, 00:14:26.347 { 00:14:26.347 "name": "BaseBdev3", 00:14:26.347 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:26.347 "is_configured": true, 00:14:26.347 "data_offset": 2048, 00:14:26.347 "data_size": 63488 00:14:26.347 } 00:14:26.347 ] 00:14:26.347 }' 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.347 16:40:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.284 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:14:27.284 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.284 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.284 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.284 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.284 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.284 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.284 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.284 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.284 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.544 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.544 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.544 "name": "raid_bdev1", 00:14:27.544 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:27.544 "strip_size_kb": 64, 00:14:27.544 "state": "online", 00:14:27.544 "raid_level": "raid5f", 00:14:27.544 "superblock": true, 00:14:27.544 "num_base_bdevs": 3, 00:14:27.544 "num_base_bdevs_discovered": 3, 00:14:27.544 "num_base_bdevs_operational": 3, 00:14:27.544 "process": { 00:14:27.544 "type": "rebuild", 00:14:27.544 "target": "spare", 00:14:27.544 "progress": { 00:14:27.544 "blocks": 92160, 00:14:27.544 "percent": 72 00:14:27.544 } 00:14:27.544 }, 00:14:27.544 "base_bdevs_list": [ 00:14:27.544 { 00:14:27.544 "name": "spare", 00:14:27.544 "uuid": "7e75473c-0535-5702-af24-1b480b0ba7c3", 00:14:27.544 "is_configured": true, 
00:14:27.544 "data_offset": 2048, 00:14:27.544 "data_size": 63488 00:14:27.544 }, 00:14:27.544 { 00:14:27.544 "name": "BaseBdev2", 00:14:27.544 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:27.544 "is_configured": true, 00:14:27.544 "data_offset": 2048, 00:14:27.544 "data_size": 63488 00:14:27.544 }, 00:14:27.544 { 00:14:27.544 "name": "BaseBdev3", 00:14:27.544 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:27.544 "is_configured": true, 00:14:27.544 "data_offset": 2048, 00:14:27.544 "data_size": 63488 00:14:27.544 } 00:14:27.544 ] 00:14:27.544 }' 00:14:27.544 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.544 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.544 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.544 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.544 16:40:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:28.482 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.482 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.482 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.482 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.483 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.483 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.483 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.483 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.483 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.483 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.483 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.483 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.483 "name": "raid_bdev1", 00:14:28.483 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:28.483 "strip_size_kb": 64, 00:14:28.483 "state": "online", 00:14:28.483 "raid_level": "raid5f", 00:14:28.483 "superblock": true, 00:14:28.483 "num_base_bdevs": 3, 00:14:28.483 "num_base_bdevs_discovered": 3, 00:14:28.483 "num_base_bdevs_operational": 3, 00:14:28.483 "process": { 00:14:28.483 "type": "rebuild", 00:14:28.483 "target": "spare", 00:14:28.483 "progress": { 00:14:28.483 "blocks": 116736, 00:14:28.483 "percent": 91 00:14:28.483 } 00:14:28.483 }, 00:14:28.483 "base_bdevs_list": [ 00:14:28.483 { 00:14:28.483 "name": "spare", 00:14:28.483 "uuid": "7e75473c-0535-5702-af24-1b480b0ba7c3", 00:14:28.483 "is_configured": true, 00:14:28.483 "data_offset": 2048, 00:14:28.483 "data_size": 63488 00:14:28.483 }, 00:14:28.483 { 00:14:28.483 "name": "BaseBdev2", 00:14:28.483 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:28.483 "is_configured": true, 00:14:28.483 "data_offset": 2048, 00:14:28.483 "data_size": 63488 00:14:28.483 }, 00:14:28.483 { 00:14:28.483 "name": "BaseBdev3", 00:14:28.483 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:28.483 "is_configured": true, 00:14:28.483 "data_offset": 2048, 00:14:28.483 "data_size": 63488 00:14:28.483 } 00:14:28.483 ] 00:14:28.483 }' 00:14:28.483 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.742 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:28.742 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.742 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.742 16:40:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.002 [2024-12-07 16:40:27.798868] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:29.002 [2024-12-07 16:40:27.798979] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:29.002 [2024-12-07 16:40:27.799109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.570 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.570 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.570 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.570 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.570 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.570 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.829 16:40:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.829 "name": "raid_bdev1", 00:14:29.829 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:29.829 "strip_size_kb": 64, 00:14:29.829 "state": "online", 00:14:29.829 "raid_level": "raid5f", 00:14:29.829 "superblock": true, 00:14:29.829 "num_base_bdevs": 3, 00:14:29.829 "num_base_bdevs_discovered": 3, 00:14:29.829 "num_base_bdevs_operational": 3, 00:14:29.829 "base_bdevs_list": [ 00:14:29.829 { 00:14:29.829 "name": "spare", 00:14:29.829 "uuid": "7e75473c-0535-5702-af24-1b480b0ba7c3", 00:14:29.829 "is_configured": true, 00:14:29.829 "data_offset": 2048, 00:14:29.829 "data_size": 63488 00:14:29.829 }, 00:14:29.829 { 00:14:29.829 "name": "BaseBdev2", 00:14:29.829 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:29.829 "is_configured": true, 00:14:29.829 "data_offset": 2048, 00:14:29.829 "data_size": 63488 00:14:29.829 }, 00:14:29.829 { 00:14:29.829 "name": "BaseBdev3", 00:14:29.829 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:29.829 "is_configured": true, 00:14:29.829 "data_offset": 2048, 00:14:29.829 "data_size": 63488 00:14:29.829 } 00:14:29.829 ] 00:14:29.829 }' 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.829 
16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.829 "name": "raid_bdev1", 00:14:29.829 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:29.829 "strip_size_kb": 64, 00:14:29.829 "state": "online", 00:14:29.829 "raid_level": "raid5f", 00:14:29.829 "superblock": true, 00:14:29.829 "num_base_bdevs": 3, 00:14:29.829 "num_base_bdevs_discovered": 3, 00:14:29.829 "num_base_bdevs_operational": 3, 00:14:29.829 "base_bdevs_list": [ 00:14:29.829 { 00:14:29.829 "name": "spare", 00:14:29.829 "uuid": "7e75473c-0535-5702-af24-1b480b0ba7c3", 00:14:29.829 "is_configured": true, 00:14:29.829 "data_offset": 2048, 00:14:29.829 "data_size": 63488 00:14:29.829 }, 00:14:29.829 { 00:14:29.829 "name": "BaseBdev2", 00:14:29.829 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:29.829 "is_configured": true, 00:14:29.829 "data_offset": 2048, 00:14:29.829 "data_size": 63488 00:14:29.829 }, 00:14:29.829 { 00:14:29.829 "name": "BaseBdev3", 00:14:29.829 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:29.829 "is_configured": true, 00:14:29.829 "data_offset": 2048, 
00:14:29.829 "data_size": 63488 00:14:29.829 } 00:14:29.829 ] 00:14:29.829 }' 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:29.829 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.088 "name": "raid_bdev1", 00:14:30.088 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:30.088 "strip_size_kb": 64, 00:14:30.088 "state": "online", 00:14:30.088 "raid_level": "raid5f", 00:14:30.088 "superblock": true, 00:14:30.088 "num_base_bdevs": 3, 00:14:30.088 "num_base_bdevs_discovered": 3, 00:14:30.088 "num_base_bdevs_operational": 3, 00:14:30.088 "base_bdevs_list": [ 00:14:30.088 { 00:14:30.088 "name": "spare", 00:14:30.088 "uuid": "7e75473c-0535-5702-af24-1b480b0ba7c3", 00:14:30.088 "is_configured": true, 00:14:30.088 "data_offset": 2048, 00:14:30.088 "data_size": 63488 00:14:30.088 }, 00:14:30.088 { 00:14:30.088 "name": "BaseBdev2", 00:14:30.088 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:30.088 "is_configured": true, 00:14:30.088 "data_offset": 2048, 00:14:30.088 "data_size": 63488 00:14:30.088 }, 00:14:30.088 { 00:14:30.088 "name": "BaseBdev3", 00:14:30.088 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:30.088 "is_configured": true, 00:14:30.088 "data_offset": 2048, 00:14:30.088 "data_size": 63488 00:14:30.088 } 00:14:30.088 ] 00:14:30.088 }' 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.088 16:40:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.657 [2024-12-07 16:40:29.261173] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:30.657 [2024-12-07 16:40:29.261252] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.657 [2024-12-07 16:40:29.261381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.657 [2024-12-07 16:40:29.261503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.657 [2024-12-07 16:40:29.261557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:30.657 16:40:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:30.657 /dev/nbd0 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:30.657 1+0 records in 00:14:30.657 1+0 records out 00:14:30.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039133 s, 10.5 MB/s 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:30.657 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:30.918 /dev/nbd1 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:30.918 
16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:30.918 1+0 records in 00:14:30.918 1+0 records out 00:14:30.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271725 s, 15.1 MB/s 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:30.918 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:31.177 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:31.177 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.177 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:31.177 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:14:31.177 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:31.177 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.178 16:40:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.438 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.699 [2024-12-07 16:40:30.347943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:31.699 [2024-12-07 16:40:30.348005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.699 [2024-12-07 16:40:30.348032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:31.699 [2024-12-07 16:40:30.348042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.699 [2024-12-07 16:40:30.350515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.699 [2024-12-07 16:40:30.350597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:31.699 [2024-12-07 16:40:30.350696] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:31.699 [2024-12-07 16:40:30.350740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:31.699 [2024-12-07 16:40:30.350856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:31.699 [2024-12-07 16:40:30.350962] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.699 spare 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.699 [2024-12-07 16:40:30.450856] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:14:31.699 [2024-12-07 16:40:30.450880] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:31.699 [2024-12-07 16:40:30.451149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:14:31.699 [2024-12-07 16:40:30.451655] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:14:31.699 [2024-12-07 16:40:30.451671] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:14:31.699 [2024-12-07 16:40:30.451812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.699 "name": "raid_bdev1", 00:14:31.699 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:31.699 "strip_size_kb": 64, 00:14:31.699 "state": "online", 00:14:31.699 "raid_level": "raid5f", 00:14:31.699 "superblock": true, 00:14:31.699 "num_base_bdevs": 3, 00:14:31.699 "num_base_bdevs_discovered": 3, 00:14:31.699 "num_base_bdevs_operational": 3, 00:14:31.699 "base_bdevs_list": [ 00:14:31.699 { 
00:14:31.699 "name": "spare", 00:14:31.699 "uuid": "7e75473c-0535-5702-af24-1b480b0ba7c3", 00:14:31.699 "is_configured": true, 00:14:31.699 "data_offset": 2048, 00:14:31.699 "data_size": 63488 00:14:31.699 }, 00:14:31.699 { 00:14:31.699 "name": "BaseBdev2", 00:14:31.699 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:31.699 "is_configured": true, 00:14:31.699 "data_offset": 2048, 00:14:31.699 "data_size": 63488 00:14:31.699 }, 00:14:31.699 { 00:14:31.699 "name": "BaseBdev3", 00:14:31.699 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:31.699 "is_configured": true, 00:14:31.699 "data_offset": 2048, 00:14:31.699 "data_size": 63488 00:14:31.699 } 00:14:31.699 ] 00:14:31.699 }' 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.699 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.269 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.269 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.269 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.269 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.269 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.269 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.269 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.269 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.269 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.269 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.269 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.269 "name": "raid_bdev1", 00:14:32.269 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:32.269 "strip_size_kb": 64, 00:14:32.269 "state": "online", 00:14:32.270 "raid_level": "raid5f", 00:14:32.270 "superblock": true, 00:14:32.270 "num_base_bdevs": 3, 00:14:32.270 "num_base_bdevs_discovered": 3, 00:14:32.270 "num_base_bdevs_operational": 3, 00:14:32.270 "base_bdevs_list": [ 00:14:32.270 { 00:14:32.270 "name": "spare", 00:14:32.270 "uuid": "7e75473c-0535-5702-af24-1b480b0ba7c3", 00:14:32.270 "is_configured": true, 00:14:32.270 "data_offset": 2048, 00:14:32.270 "data_size": 63488 00:14:32.270 }, 00:14:32.270 { 00:14:32.270 "name": "BaseBdev2", 00:14:32.270 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:32.270 "is_configured": true, 00:14:32.270 "data_offset": 2048, 00:14:32.270 "data_size": 63488 00:14:32.270 }, 00:14:32.270 { 00:14:32.270 "name": "BaseBdev3", 00:14:32.270 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:32.270 "is_configured": true, 00:14:32.270 "data_offset": 2048, 00:14:32.270 "data_size": 63488 00:14:32.270 } 00:14:32.270 ] 00:14:32.270 }' 00:14:32.270 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.270 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.270 16:40:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.270 [2024-12-07 16:40:31.070802] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.270 "name": "raid_bdev1", 00:14:32.270 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:32.270 "strip_size_kb": 64, 00:14:32.270 "state": "online", 00:14:32.270 "raid_level": "raid5f", 00:14:32.270 "superblock": true, 00:14:32.270 "num_base_bdevs": 3, 00:14:32.270 "num_base_bdevs_discovered": 2, 00:14:32.270 "num_base_bdevs_operational": 2, 00:14:32.270 "base_bdevs_list": [ 00:14:32.270 { 00:14:32.270 "name": null, 00:14:32.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.270 "is_configured": false, 00:14:32.270 "data_offset": 0, 00:14:32.270 "data_size": 63488 00:14:32.270 }, 00:14:32.270 { 00:14:32.270 "name": "BaseBdev2", 00:14:32.270 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:32.270 "is_configured": true, 00:14:32.270 "data_offset": 2048, 00:14:32.270 "data_size": 63488 00:14:32.270 }, 00:14:32.270 { 00:14:32.270 "name": "BaseBdev3", 00:14:32.270 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:32.270 "is_configured": true, 00:14:32.270 "data_offset": 2048, 00:14:32.270 "data_size": 63488 00:14:32.270 } 00:14:32.270 ] 00:14:32.270 }' 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.270 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:32.841 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:32.841 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.841 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.841 [2024-12-07 16:40:31.565958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.841 [2024-12-07 16:40:31.566180] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:32.841 [2024-12-07 16:40:31.566238] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:32.841 [2024-12-07 16:40:31.566301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.841 [2024-12-07 16:40:31.572665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:14:32.841 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.841 16:40:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:32.841 [2024-12-07 16:40:31.575107] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:33.780 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.780 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.780 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.780 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.780 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.780 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.780 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.780 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.780 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.780 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.780 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.780 "name": "raid_bdev1", 00:14:33.780 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:33.780 "strip_size_kb": 64, 00:14:33.780 "state": "online", 00:14:33.780 "raid_level": "raid5f", 00:14:33.780 "superblock": true, 00:14:33.780 "num_base_bdevs": 3, 00:14:33.780 "num_base_bdevs_discovered": 3, 00:14:33.780 "num_base_bdevs_operational": 3, 00:14:33.780 "process": { 00:14:33.780 "type": "rebuild", 00:14:33.780 "target": "spare", 00:14:33.780 "progress": { 00:14:33.780 "blocks": 20480, 00:14:33.780 "percent": 16 00:14:33.780 } 00:14:33.780 }, 00:14:33.780 "base_bdevs_list": [ 00:14:33.780 { 00:14:33.780 "name": "spare", 00:14:33.780 "uuid": "7e75473c-0535-5702-af24-1b480b0ba7c3", 00:14:33.780 "is_configured": true, 00:14:33.780 "data_offset": 2048, 00:14:33.780 "data_size": 63488 00:14:33.780 }, 00:14:33.780 { 00:14:33.780 "name": "BaseBdev2", 00:14:33.780 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:33.780 "is_configured": true, 00:14:33.780 "data_offset": 2048, 00:14:33.780 "data_size": 63488 00:14:33.780 }, 00:14:33.780 { 00:14:33.780 "name": "BaseBdev3", 00:14:33.780 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:33.780 "is_configured": true, 00:14:33.780 "data_offset": 2048, 00:14:33.780 "data_size": 63488 00:14:33.780 } 00:14:33.780 ] 00:14:33.780 }' 00:14:33.780 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.041 [2024-12-07 16:40:32.734985] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.041 [2024-12-07 16:40:32.783081] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:34.041 [2024-12-07 16:40:32.783138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.041 [2024-12-07 16:40:32.783158] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.041 [2024-12-07 16:40:32.783166] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.041 "name": "raid_bdev1", 00:14:34.041 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:34.041 "strip_size_kb": 64, 00:14:34.041 "state": "online", 00:14:34.041 "raid_level": "raid5f", 00:14:34.041 "superblock": true, 00:14:34.041 "num_base_bdevs": 3, 00:14:34.041 "num_base_bdevs_discovered": 2, 00:14:34.041 "num_base_bdevs_operational": 2, 00:14:34.041 "base_bdevs_list": [ 00:14:34.041 { 00:14:34.041 "name": null, 00:14:34.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.041 "is_configured": false, 00:14:34.041 "data_offset": 0, 00:14:34.041 "data_size": 63488 00:14:34.041 }, 00:14:34.041 { 00:14:34.041 "name": "BaseBdev2", 00:14:34.041 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:34.041 "is_configured": true, 00:14:34.041 
"data_offset": 2048, 00:14:34.041 "data_size": 63488 00:14:34.041 }, 00:14:34.041 { 00:14:34.041 "name": "BaseBdev3", 00:14:34.041 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:34.041 "is_configured": true, 00:14:34.041 "data_offset": 2048, 00:14:34.041 "data_size": 63488 00:14:34.041 } 00:14:34.041 ] 00:14:34.041 }' 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.041 16:40:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.611 16:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:34.611 16:40:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.611 16:40:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.611 [2024-12-07 16:40:33.251092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:34.611 [2024-12-07 16:40:33.251197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.611 [2024-12-07 16:40:33.251237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:34.611 [2024-12-07 16:40:33.251263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.611 [2024-12-07 16:40:33.251830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.611 [2024-12-07 16:40:33.251887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:34.611 [2024-12-07 16:40:33.252010] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:34.611 [2024-12-07 16:40:33.252050] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:34.611 [2024-12-07 16:40:33.252095] bdev_raid.c:3748:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:34.611 [2024-12-07 16:40:33.252194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.611 [2024-12-07 16:40:33.257931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:14:34.611 spare 00:14:34.611 16:40:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.612 16:40:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:34.612 [2024-12-07 16:40:33.260424] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:35.550 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.550 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.550 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.550 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.550 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.550 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.550 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.550 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.550 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.550 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.550 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.550 "name": "raid_bdev1", 00:14:35.550 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 
00:14:35.550 "strip_size_kb": 64, 00:14:35.550 "state": "online", 00:14:35.550 "raid_level": "raid5f", 00:14:35.550 "superblock": true, 00:14:35.550 "num_base_bdevs": 3, 00:14:35.550 "num_base_bdevs_discovered": 3, 00:14:35.550 "num_base_bdevs_operational": 3, 00:14:35.550 "process": { 00:14:35.550 "type": "rebuild", 00:14:35.550 "target": "spare", 00:14:35.550 "progress": { 00:14:35.550 "blocks": 20480, 00:14:35.550 "percent": 16 00:14:35.550 } 00:14:35.550 }, 00:14:35.550 "base_bdevs_list": [ 00:14:35.550 { 00:14:35.550 "name": "spare", 00:14:35.550 "uuid": "7e75473c-0535-5702-af24-1b480b0ba7c3", 00:14:35.550 "is_configured": true, 00:14:35.550 "data_offset": 2048, 00:14:35.551 "data_size": 63488 00:14:35.551 }, 00:14:35.551 { 00:14:35.551 "name": "BaseBdev2", 00:14:35.551 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:35.551 "is_configured": true, 00:14:35.551 "data_offset": 2048, 00:14:35.551 "data_size": 63488 00:14:35.551 }, 00:14:35.551 { 00:14:35.551 "name": "BaseBdev3", 00:14:35.551 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:35.551 "is_configured": true, 00:14:35.551 "data_offset": 2048, 00:14:35.551 "data_size": 63488 00:14:35.551 } 00:14:35.551 ] 00:14:35.551 }' 00:14:35.551 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.551 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.551 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.551 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.551 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:35.551 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.551 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:35.551 [2024-12-07 16:40:34.424248] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.810 [2024-12-07 16:40:34.468366] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:35.810 [2024-12-07 16:40:34.468423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.810 [2024-12-07 16:40:34.468439] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.810 [2024-12-07 16:40:34.468453] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.810 
16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.810 "name": "raid_bdev1", 00:14:35.810 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:35.810 "strip_size_kb": 64, 00:14:35.810 "state": "online", 00:14:35.810 "raid_level": "raid5f", 00:14:35.810 "superblock": true, 00:14:35.810 "num_base_bdevs": 3, 00:14:35.810 "num_base_bdevs_discovered": 2, 00:14:35.810 "num_base_bdevs_operational": 2, 00:14:35.810 "base_bdevs_list": [ 00:14:35.810 { 00:14:35.810 "name": null, 00:14:35.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.810 "is_configured": false, 00:14:35.810 "data_offset": 0, 00:14:35.810 "data_size": 63488 00:14:35.810 }, 00:14:35.810 { 00:14:35.810 "name": "BaseBdev2", 00:14:35.810 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:35.810 "is_configured": true, 00:14:35.810 "data_offset": 2048, 00:14:35.810 "data_size": 63488 00:14:35.810 }, 00:14:35.810 { 00:14:35.810 "name": "BaseBdev3", 00:14:35.810 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:35.810 "is_configured": true, 00:14:35.810 "data_offset": 2048, 00:14:35.810 "data_size": 63488 00:14:35.810 } 00:14:35.810 ] 00:14:35.810 }' 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.810 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.378 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:36.378 16:40:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.378 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:36.378 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:36.378 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.378 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.378 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.378 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.378 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.378 16:40:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.378 16:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.378 "name": "raid_bdev1", 00:14:36.379 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:36.379 "strip_size_kb": 64, 00:14:36.379 "state": "online", 00:14:36.379 "raid_level": "raid5f", 00:14:36.379 "superblock": true, 00:14:36.379 "num_base_bdevs": 3, 00:14:36.379 "num_base_bdevs_discovered": 2, 00:14:36.379 "num_base_bdevs_operational": 2, 00:14:36.379 "base_bdevs_list": [ 00:14:36.379 { 00:14:36.379 "name": null, 00:14:36.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.379 "is_configured": false, 00:14:36.379 "data_offset": 0, 00:14:36.379 "data_size": 63488 00:14:36.379 }, 00:14:36.379 { 00:14:36.379 "name": "BaseBdev2", 00:14:36.379 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:36.379 "is_configured": true, 00:14:36.379 "data_offset": 2048, 00:14:36.379 "data_size": 63488 00:14:36.379 }, 00:14:36.379 { 00:14:36.379 "name": "BaseBdev3", 00:14:36.379 "uuid": 
"8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:36.379 "is_configured": true, 00:14:36.379 "data_offset": 2048, 00:14:36.379 "data_size": 63488 00:14:36.379 } 00:14:36.379 ] 00:14:36.379 }' 00:14:36.379 16:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.379 16:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:36.379 16:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.379 16:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:36.379 16:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:36.379 16:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.379 16:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.379 16:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.379 16:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:36.379 16:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.379 16:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.379 [2024-12-07 16:40:35.123788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:36.379 [2024-12-07 16:40:35.123889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.379 [2024-12-07 16:40:35.123931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:36.379 [2024-12-07 16:40:35.123963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.379 [2024-12-07 16:40:35.124443] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.379 [2024-12-07 16:40:35.124514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.379 [2024-12-07 16:40:35.124626] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:36.379 [2024-12-07 16:40:35.124648] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:36.379 [2024-12-07 16:40:35.124657] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:36.379 [2024-12-07 16:40:35.124670] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:36.379 BaseBdev1 00:14:36.379 16:40:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.379 16:40:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.345 "name": "raid_bdev1", 00:14:37.345 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:37.345 "strip_size_kb": 64, 00:14:37.345 "state": "online", 00:14:37.345 "raid_level": "raid5f", 00:14:37.345 "superblock": true, 00:14:37.345 "num_base_bdevs": 3, 00:14:37.345 "num_base_bdevs_discovered": 2, 00:14:37.345 "num_base_bdevs_operational": 2, 00:14:37.345 "base_bdevs_list": [ 00:14:37.345 { 00:14:37.345 "name": null, 00:14:37.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.345 "is_configured": false, 00:14:37.345 "data_offset": 0, 00:14:37.345 "data_size": 63488 00:14:37.345 }, 00:14:37.345 { 00:14:37.345 "name": "BaseBdev2", 00:14:37.345 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:37.345 "is_configured": true, 00:14:37.345 "data_offset": 2048, 00:14:37.345 "data_size": 63488 00:14:37.345 }, 00:14:37.345 { 00:14:37.345 "name": "BaseBdev3", 00:14:37.345 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:37.345 "is_configured": true, 00:14:37.345 "data_offset": 2048, 00:14:37.345 "data_size": 63488 00:14:37.345 } 00:14:37.345 ] 00:14:37.345 }' 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:37.345 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.916 "name": "raid_bdev1", 00:14:37.916 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:37.916 "strip_size_kb": 64, 00:14:37.916 "state": "online", 00:14:37.916 "raid_level": "raid5f", 00:14:37.916 "superblock": true, 00:14:37.916 "num_base_bdevs": 3, 00:14:37.916 "num_base_bdevs_discovered": 2, 00:14:37.916 "num_base_bdevs_operational": 2, 00:14:37.916 "base_bdevs_list": [ 00:14:37.916 { 00:14:37.916 "name": null, 00:14:37.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.916 "is_configured": false, 00:14:37.916 "data_offset": 0, 00:14:37.916 "data_size": 63488 00:14:37.916 }, 00:14:37.916 { 00:14:37.916 "name": 
"BaseBdev2", 00:14:37.916 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:37.916 "is_configured": true, 00:14:37.916 "data_offset": 2048, 00:14:37.916 "data_size": 63488 00:14:37.916 }, 00:14:37.916 { 00:14:37.916 "name": "BaseBdev3", 00:14:37.916 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:37.916 "is_configured": true, 00:14:37.916 "data_offset": 2048, 00:14:37.916 "data_size": 63488 00:14:37.916 } 00:14:37.916 ] 00:14:37.916 }' 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.916 [2024-12-07 16:40:36.721520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.916 [2024-12-07 16:40:36.721705] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:37.916 [2024-12-07 16:40:36.721719] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:37.916 request: 00:14:37.916 { 00:14:37.916 "base_bdev": "BaseBdev1", 00:14:37.916 "raid_bdev": "raid_bdev1", 00:14:37.916 "method": "bdev_raid_add_base_bdev", 00:14:37.916 "req_id": 1 00:14:37.916 } 00:14:37.916 Got JSON-RPC error response 00:14:37.916 response: 00:14:37.916 { 00:14:37.916 "code": -22, 00:14:37.916 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:37.916 } 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:37.916 16:40:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:38.858 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:38.858 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.858 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:14:38.859 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.859 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.859 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.859 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.859 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.859 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.859 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.859 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.859 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.859 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.859 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.119 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.119 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.119 "name": "raid_bdev1", 00:14:39.119 "uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:39.119 "strip_size_kb": 64, 00:14:39.119 "state": "online", 00:14:39.119 "raid_level": "raid5f", 00:14:39.119 "superblock": true, 00:14:39.119 "num_base_bdevs": 3, 00:14:39.119 "num_base_bdevs_discovered": 2, 00:14:39.119 "num_base_bdevs_operational": 2, 00:14:39.119 "base_bdevs_list": [ 00:14:39.119 { 00:14:39.119 "name": null, 00:14:39.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.119 "is_configured": false, 00:14:39.119 "data_offset": 0, 00:14:39.119 
"data_size": 63488 00:14:39.119 }, 00:14:39.119 { 00:14:39.119 "name": "BaseBdev2", 00:14:39.119 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:39.119 "is_configured": true, 00:14:39.119 "data_offset": 2048, 00:14:39.119 "data_size": 63488 00:14:39.119 }, 00:14:39.119 { 00:14:39.119 "name": "BaseBdev3", 00:14:39.119 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:39.119 "is_configured": true, 00:14:39.119 "data_offset": 2048, 00:14:39.119 "data_size": 63488 00:14:39.119 } 00:14:39.119 ] 00:14:39.119 }' 00:14:39.119 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.119 16:40:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.378 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.378 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.378 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.378 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.378 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.378 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.378 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.378 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.378 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.378 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.378 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.378 "name": "raid_bdev1", 00:14:39.378 
"uuid": "aa564a09-94ac-4ca3-b744-3ed99332d9d7", 00:14:39.378 "strip_size_kb": 64, 00:14:39.378 "state": "online", 00:14:39.378 "raid_level": "raid5f", 00:14:39.378 "superblock": true, 00:14:39.378 "num_base_bdevs": 3, 00:14:39.378 "num_base_bdevs_discovered": 2, 00:14:39.378 "num_base_bdevs_operational": 2, 00:14:39.378 "base_bdevs_list": [ 00:14:39.378 { 00:14:39.378 "name": null, 00:14:39.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.378 "is_configured": false, 00:14:39.378 "data_offset": 0, 00:14:39.378 "data_size": 63488 00:14:39.378 }, 00:14:39.378 { 00:14:39.378 "name": "BaseBdev2", 00:14:39.378 "uuid": "5020718d-74df-5122-a8be-36b528c52252", 00:14:39.378 "is_configured": true, 00:14:39.378 "data_offset": 2048, 00:14:39.378 "data_size": 63488 00:14:39.378 }, 00:14:39.378 { 00:14:39.378 "name": "BaseBdev3", 00:14:39.378 "uuid": "8f1aa299-74bd-5c2c-aa27-2d4780a8eab3", 00:14:39.378 "is_configured": true, 00:14:39.378 "data_offset": 2048, 00:14:39.378 "data_size": 63488 00:14:39.378 } 00:14:39.378 ] 00:14:39.378 }' 00:14:39.378 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.639 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.639 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.639 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:39.639 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92834 00:14:39.639 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92834 ']' 00:14:39.639 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92834 00:14:39.639 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:39.639 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.639 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92834 00:14:39.639 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.639 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.639 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92834' 00:14:39.639 killing process with pid 92834 00:14:39.639 Received shutdown signal, test time was about 60.000000 seconds 00:14:39.639 00:14:39.639 Latency(us) 00:14:39.639 [2024-12-07T16:40:38.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.639 [2024-12-07T16:40:38.538Z] =================================================================================================================== 00:14:39.639 [2024-12-07T16:40:38.538Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:39.639 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92834 00:14:39.639 [2024-12-07 16:40:38.366414] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.639 [2024-12-07 16:40:38.366560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.639 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92834 00:14:39.639 [2024-12-07 16:40:38.366633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.639 [2024-12-07 16:40:38.366644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:14:39.639 [2024-12-07 16:40:38.438502] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.899 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # 
return 0 00:14:39.899 00:14:39.899 real 0m21.882s 00:14:39.899 user 0m28.302s 00:14:39.899 sys 0m2.864s 00:14:39.899 ************************************ 00:14:39.899 END TEST raid5f_rebuild_test_sb 00:14:39.899 ************************************ 00:14:39.899 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:39.899 16:40:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.159 16:40:38 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:40.159 16:40:38 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:40.159 16:40:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:40.159 16:40:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:40.159 16:40:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:40.159 ************************************ 00:14:40.159 START TEST raid5f_state_function_test 00:14:40.159 ************************************ 00:14:40.159 16:40:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:14:40.159 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:40.159 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:40.159 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:40.159 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:40.159 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:40.159 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.159 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:40.159 16:40:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:40.159 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.159 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:40.159 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:40.159 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:40.160 
16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93571 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:40.160 Process raid pid: 93571 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93571' 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93571 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93571 ']' 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:40.160 16:40:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.160 [2024-12-07 16:40:38.966495] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:40.160 [2024-12-07 16:40:38.966701] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.420 [2024-12-07 16:40:39.127802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.420 [2024-12-07 16:40:39.195759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.420 [2024-12-07 16:40:39.270550] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.420 [2024-12-07 16:40:39.270665] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.990 [2024-12-07 16:40:39.789412] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.990 [2024-12-07 16:40:39.789472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.990 [2024-12-07 16:40:39.789492] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:40.990 [2024-12-07 16:40:39.789505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:40.990 [2024-12-07 16:40:39.789511] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:40.990 [2024-12-07 16:40:39.789523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:40.990 [2024-12-07 16:40:39.789529] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:40.990 [2024-12-07 16:40:39.789537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.990 16:40:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.990 "name": "Existed_Raid", 00:14:40.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.990 "strip_size_kb": 64, 00:14:40.990 "state": "configuring", 00:14:40.990 "raid_level": "raid5f", 00:14:40.990 "superblock": false, 00:14:40.990 "num_base_bdevs": 4, 00:14:40.990 "num_base_bdevs_discovered": 0, 00:14:40.990 "num_base_bdevs_operational": 4, 00:14:40.990 "base_bdevs_list": [ 00:14:40.990 { 00:14:40.990 "name": "BaseBdev1", 00:14:40.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.990 "is_configured": false, 00:14:40.990 "data_offset": 0, 00:14:40.990 "data_size": 0 00:14:40.990 }, 00:14:40.990 { 00:14:40.990 "name": "BaseBdev2", 00:14:40.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.990 "is_configured": false, 00:14:40.990 "data_offset": 0, 00:14:40.990 "data_size": 0 00:14:40.990 }, 00:14:40.990 { 00:14:40.990 "name": "BaseBdev3", 00:14:40.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.990 "is_configured": false, 00:14:40.990 "data_offset": 0, 00:14:40.990 "data_size": 0 00:14:40.990 }, 00:14:40.990 { 00:14:40.990 "name": "BaseBdev4", 00:14:40.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.990 "is_configured": false, 00:14:40.990 "data_offset": 0, 00:14:40.990 "data_size": 0 00:14:40.990 } 00:14:40.990 ] 00:14:40.990 }' 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.990 16:40:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.560 [2024-12-07 16:40:40.212576] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:41.560 [2024-12-07 16:40:40.212674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.560 [2024-12-07 16:40:40.224590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.560 [2024-12-07 16:40:40.224665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.560 [2024-12-07 16:40:40.224692] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:41.560 [2024-12-07 16:40:40.224715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.560 [2024-12-07 16:40:40.224732] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:41.560 [2024-12-07 16:40:40.224752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:41.560 [2024-12-07 16:40:40.224769] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:41.560 [2024-12-07 16:40:40.224799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.560 [2024-12-07 16:40:40.251453] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.560 BaseBdev1 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:41.560 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.561 
16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.561 [ 00:14:41.561 { 00:14:41.561 "name": "BaseBdev1", 00:14:41.561 "aliases": [ 00:14:41.561 "118d7216-002f-46f7-ad66-d2c1af71830f" 00:14:41.561 ], 00:14:41.561 "product_name": "Malloc disk", 00:14:41.561 "block_size": 512, 00:14:41.561 "num_blocks": 65536, 00:14:41.561 "uuid": "118d7216-002f-46f7-ad66-d2c1af71830f", 00:14:41.561 "assigned_rate_limits": { 00:14:41.561 "rw_ios_per_sec": 0, 00:14:41.561 "rw_mbytes_per_sec": 0, 00:14:41.561 "r_mbytes_per_sec": 0, 00:14:41.561 "w_mbytes_per_sec": 0 00:14:41.561 }, 00:14:41.561 "claimed": true, 00:14:41.561 "claim_type": "exclusive_write", 00:14:41.561 "zoned": false, 00:14:41.561 "supported_io_types": { 00:14:41.561 "read": true, 00:14:41.561 "write": true, 00:14:41.561 "unmap": true, 00:14:41.561 "flush": true, 00:14:41.561 "reset": true, 00:14:41.561 "nvme_admin": false, 00:14:41.561 "nvme_io": false, 00:14:41.561 "nvme_io_md": false, 00:14:41.561 "write_zeroes": true, 00:14:41.561 "zcopy": true, 00:14:41.561 "get_zone_info": false, 00:14:41.561 "zone_management": false, 00:14:41.561 "zone_append": false, 00:14:41.561 "compare": false, 00:14:41.561 "compare_and_write": false, 00:14:41.561 "abort": true, 00:14:41.561 "seek_hole": false, 00:14:41.561 "seek_data": false, 00:14:41.561 "copy": true, 00:14:41.561 "nvme_iov_md": false 00:14:41.561 }, 00:14:41.561 "memory_domains": [ 00:14:41.561 { 00:14:41.561 "dma_device_id": "system", 00:14:41.561 "dma_device_type": 1 00:14:41.561 }, 00:14:41.561 { 00:14:41.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.561 "dma_device_type": 2 00:14:41.561 } 00:14:41.561 ], 00:14:41.561 "driver_specific": {} 00:14:41.561 } 
00:14:41.561 ] 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.561 "name": "Existed_Raid", 00:14:41.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.561 "strip_size_kb": 64, 00:14:41.561 "state": "configuring", 00:14:41.561 "raid_level": "raid5f", 00:14:41.561 "superblock": false, 00:14:41.561 "num_base_bdevs": 4, 00:14:41.561 "num_base_bdevs_discovered": 1, 00:14:41.561 "num_base_bdevs_operational": 4, 00:14:41.561 "base_bdevs_list": [ 00:14:41.561 { 00:14:41.561 "name": "BaseBdev1", 00:14:41.561 "uuid": "118d7216-002f-46f7-ad66-d2c1af71830f", 00:14:41.561 "is_configured": true, 00:14:41.561 "data_offset": 0, 00:14:41.561 "data_size": 65536 00:14:41.561 }, 00:14:41.561 { 00:14:41.561 "name": "BaseBdev2", 00:14:41.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.561 "is_configured": false, 00:14:41.561 "data_offset": 0, 00:14:41.561 "data_size": 0 00:14:41.561 }, 00:14:41.561 { 00:14:41.561 "name": "BaseBdev3", 00:14:41.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.561 "is_configured": false, 00:14:41.561 "data_offset": 0, 00:14:41.561 "data_size": 0 00:14:41.561 }, 00:14:41.561 { 00:14:41.561 "name": "BaseBdev4", 00:14:41.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.561 "is_configured": false, 00:14:41.561 "data_offset": 0, 00:14:41.561 "data_size": 0 00:14:41.561 } 00:14:41.561 ] 00:14:41.561 }' 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.561 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.133 
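The verify_raid_bdev_state helper exercised above pulls `bdev_raid_get_bdevs all` output through `jq -r '.[] | select(.name == "Existed_Raid")'` and compares fields against the expected values. A minimal Python sketch of that same check, run against the raid_bdev_info JSON captured in this log (field names are taken verbatim from the log; the real helper lives in bdev/bdev_raid.sh and this is only an illustrative re-implementation, not the test's actual code):

```python
import json

# raid_bdev_info as captured in this log after BaseBdev1 was configured
# (per-bdev entries abbreviated to the fields the state check cares about).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": false,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true,  "data_size": 65536},
    {"name": "BaseBdev2", "is_configured": false, "data_size": 0},
    {"name": "BaseBdev3", "is_configured": false, "data_size": 0},
    {"name": "BaseBdev4", "is_configured": false, "data_size": 0}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Mirror the field comparisons verify_raid_bdev_state performs in the shell."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # The discovered count must equal the number of configured base bdevs.
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == configured

# Matches the invocation above: verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
verify_raid_bdev_state(raid_bdev_info, "configuring", "raid5f", 64, 4)
```

The sum-over-`is_configured` line corresponds to the log's progression of num_base_bdevs_discovered from 0 to 1 as each `bdev_malloc_create 32 512 -b BaseBdevN` adds a claimable base bdev to the still-configuring array.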
[2024-12-07 16:40:40.739506] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.133 [2024-12-07 16:40:40.739598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.133 [2024-12-07 16:40:40.751551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.133 [2024-12-07 16:40:40.753626] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.133 [2024-12-07 16:40:40.753664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.133 [2024-12-07 16:40:40.753673] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.133 [2024-12-07 16:40:40.753682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.133 [2024-12-07 16:40:40.753688] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:42.133 [2024-12-07 16:40:40.753696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.133 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.133 "name": "Existed_Raid", 00:14:42.133 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:42.133 "strip_size_kb": 64, 00:14:42.133 "state": "configuring", 00:14:42.133 "raid_level": "raid5f", 00:14:42.133 "superblock": false, 00:14:42.133 "num_base_bdevs": 4, 00:14:42.133 "num_base_bdevs_discovered": 1, 00:14:42.133 "num_base_bdevs_operational": 4, 00:14:42.133 "base_bdevs_list": [ 00:14:42.134 { 00:14:42.134 "name": "BaseBdev1", 00:14:42.134 "uuid": "118d7216-002f-46f7-ad66-d2c1af71830f", 00:14:42.134 "is_configured": true, 00:14:42.134 "data_offset": 0, 00:14:42.134 "data_size": 65536 00:14:42.134 }, 00:14:42.134 { 00:14:42.134 "name": "BaseBdev2", 00:14:42.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.134 "is_configured": false, 00:14:42.134 "data_offset": 0, 00:14:42.134 "data_size": 0 00:14:42.134 }, 00:14:42.134 { 00:14:42.134 "name": "BaseBdev3", 00:14:42.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.134 "is_configured": false, 00:14:42.134 "data_offset": 0, 00:14:42.134 "data_size": 0 00:14:42.134 }, 00:14:42.134 { 00:14:42.134 "name": "BaseBdev4", 00:14:42.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.134 "is_configured": false, 00:14:42.134 "data_offset": 0, 00:14:42.134 "data_size": 0 00:14:42.134 } 00:14:42.134 ] 00:14:42.134 }' 00:14:42.134 16:40:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.134 16:40:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.395 [2024-12-07 16:40:41.187244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:42.395 BaseBdev2 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.395 [ 00:14:42.395 { 00:14:42.395 "name": "BaseBdev2", 00:14:42.395 "aliases": [ 00:14:42.395 "de76a67b-12b0-4dd8-b0f1-054b56fec9a9" 00:14:42.395 ], 00:14:42.395 "product_name": "Malloc disk", 00:14:42.395 "block_size": 512, 00:14:42.395 "num_blocks": 65536, 00:14:42.395 "uuid": "de76a67b-12b0-4dd8-b0f1-054b56fec9a9", 00:14:42.395 "assigned_rate_limits": { 00:14:42.395 "rw_ios_per_sec": 0, 00:14:42.395 "rw_mbytes_per_sec": 0, 00:14:42.395 
"r_mbytes_per_sec": 0, 00:14:42.395 "w_mbytes_per_sec": 0 00:14:42.395 }, 00:14:42.395 "claimed": true, 00:14:42.395 "claim_type": "exclusive_write", 00:14:42.395 "zoned": false, 00:14:42.395 "supported_io_types": { 00:14:42.395 "read": true, 00:14:42.395 "write": true, 00:14:42.395 "unmap": true, 00:14:42.395 "flush": true, 00:14:42.395 "reset": true, 00:14:42.395 "nvme_admin": false, 00:14:42.395 "nvme_io": false, 00:14:42.395 "nvme_io_md": false, 00:14:42.395 "write_zeroes": true, 00:14:42.395 "zcopy": true, 00:14:42.395 "get_zone_info": false, 00:14:42.395 "zone_management": false, 00:14:42.395 "zone_append": false, 00:14:42.395 "compare": false, 00:14:42.395 "compare_and_write": false, 00:14:42.395 "abort": true, 00:14:42.395 "seek_hole": false, 00:14:42.395 "seek_data": false, 00:14:42.395 "copy": true, 00:14:42.395 "nvme_iov_md": false 00:14:42.395 }, 00:14:42.395 "memory_domains": [ 00:14:42.395 { 00:14:42.395 "dma_device_id": "system", 00:14:42.395 "dma_device_type": 1 00:14:42.395 }, 00:14:42.395 { 00:14:42.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.395 "dma_device_type": 2 00:14:42.395 } 00:14:42.395 ], 00:14:42.395 "driver_specific": {} 00:14:42.395 } 00:14:42.395 ] 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.395 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.396 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.396 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.396 "name": "Existed_Raid", 00:14:42.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.396 "strip_size_kb": 64, 00:14:42.396 "state": "configuring", 00:14:42.396 "raid_level": "raid5f", 00:14:42.396 "superblock": false, 00:14:42.396 "num_base_bdevs": 4, 00:14:42.396 "num_base_bdevs_discovered": 2, 00:14:42.396 "num_base_bdevs_operational": 4, 00:14:42.396 "base_bdevs_list": [ 00:14:42.396 { 00:14:42.396 "name": "BaseBdev1", 00:14:42.396 "uuid": 
"118d7216-002f-46f7-ad66-d2c1af71830f", 00:14:42.396 "is_configured": true, 00:14:42.396 "data_offset": 0, 00:14:42.396 "data_size": 65536 00:14:42.396 }, 00:14:42.396 { 00:14:42.396 "name": "BaseBdev2", 00:14:42.396 "uuid": "de76a67b-12b0-4dd8-b0f1-054b56fec9a9", 00:14:42.396 "is_configured": true, 00:14:42.396 "data_offset": 0, 00:14:42.396 "data_size": 65536 00:14:42.396 }, 00:14:42.396 { 00:14:42.396 "name": "BaseBdev3", 00:14:42.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.396 "is_configured": false, 00:14:42.396 "data_offset": 0, 00:14:42.396 "data_size": 0 00:14:42.396 }, 00:14:42.396 { 00:14:42.396 "name": "BaseBdev4", 00:14:42.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.396 "is_configured": false, 00:14:42.396 "data_offset": 0, 00:14:42.396 "data_size": 0 00:14:42.396 } 00:14:42.396 ] 00:14:42.396 }' 00:14:42.396 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.396 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.966 [2024-12-07 16:40:41.695082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:42.966 BaseBdev3 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.966 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.966 [ 00:14:42.966 { 00:14:42.966 "name": "BaseBdev3", 00:14:42.966 "aliases": [ 00:14:42.966 "fb55d7e2-9873-420c-8a8f-ba2a1ae9e459" 00:14:42.966 ], 00:14:42.966 "product_name": "Malloc disk", 00:14:42.966 "block_size": 512, 00:14:42.966 "num_blocks": 65536, 00:14:42.966 "uuid": "fb55d7e2-9873-420c-8a8f-ba2a1ae9e459", 00:14:42.966 "assigned_rate_limits": { 00:14:42.966 "rw_ios_per_sec": 0, 00:14:42.966 "rw_mbytes_per_sec": 0, 00:14:42.966 "r_mbytes_per_sec": 0, 00:14:42.966 "w_mbytes_per_sec": 0 00:14:42.966 }, 00:14:42.966 "claimed": true, 00:14:42.966 "claim_type": "exclusive_write", 00:14:42.966 "zoned": false, 00:14:42.966 "supported_io_types": { 00:14:42.966 "read": true, 00:14:42.966 "write": true, 00:14:42.966 "unmap": true, 00:14:42.966 "flush": true, 00:14:42.966 "reset": true, 00:14:42.966 "nvme_admin": false, 
00:14:42.966 "nvme_io": false, 00:14:42.966 "nvme_io_md": false, 00:14:42.966 "write_zeroes": true, 00:14:42.966 "zcopy": true, 00:14:42.966 "get_zone_info": false, 00:14:42.967 "zone_management": false, 00:14:42.967 "zone_append": false, 00:14:42.967 "compare": false, 00:14:42.967 "compare_and_write": false, 00:14:42.967 "abort": true, 00:14:42.967 "seek_hole": false, 00:14:42.967 "seek_data": false, 00:14:42.967 "copy": true, 00:14:42.967 "nvme_iov_md": false 00:14:42.967 }, 00:14:42.967 "memory_domains": [ 00:14:42.967 { 00:14:42.967 "dma_device_id": "system", 00:14:42.967 "dma_device_type": 1 00:14:42.967 }, 00:14:42.967 { 00:14:42.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.967 "dma_device_type": 2 00:14:42.967 } 00:14:42.967 ], 00:14:42.967 "driver_specific": {} 00:14:42.967 } 00:14:42.967 ] 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.967 "name": "Existed_Raid", 00:14:42.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.967 "strip_size_kb": 64, 00:14:42.967 "state": "configuring", 00:14:42.967 "raid_level": "raid5f", 00:14:42.967 "superblock": false, 00:14:42.967 "num_base_bdevs": 4, 00:14:42.967 "num_base_bdevs_discovered": 3, 00:14:42.967 "num_base_bdevs_operational": 4, 00:14:42.967 "base_bdevs_list": [ 00:14:42.967 { 00:14:42.967 "name": "BaseBdev1", 00:14:42.967 "uuid": "118d7216-002f-46f7-ad66-d2c1af71830f", 00:14:42.967 "is_configured": true, 00:14:42.967 "data_offset": 0, 00:14:42.967 "data_size": 65536 00:14:42.967 }, 00:14:42.967 { 00:14:42.967 "name": "BaseBdev2", 00:14:42.967 "uuid": "de76a67b-12b0-4dd8-b0f1-054b56fec9a9", 00:14:42.967 "is_configured": true, 00:14:42.967 "data_offset": 0, 00:14:42.967 "data_size": 65536 00:14:42.967 }, 00:14:42.967 { 
00:14:42.967 "name": "BaseBdev3", 00:14:42.967 "uuid": "fb55d7e2-9873-420c-8a8f-ba2a1ae9e459", 00:14:42.967 "is_configured": true, 00:14:42.967 "data_offset": 0, 00:14:42.967 "data_size": 65536 00:14:42.967 }, 00:14:42.967 { 00:14:42.967 "name": "BaseBdev4", 00:14:42.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.967 "is_configured": false, 00:14:42.967 "data_offset": 0, 00:14:42.967 "data_size": 0 00:14:42.967 } 00:14:42.967 ] 00:14:42.967 }' 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.967 16:40:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.536 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:43.536 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.536 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.536 [2024-12-07 16:40:42.182998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:43.536 [2024-12-07 16:40:42.183062] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:43.536 [2024-12-07 16:40:42.183078] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:43.536 [2024-12-07 16:40:42.183410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:43.536 [2024-12-07 16:40:42.183894] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:43.536 [2024-12-07 16:40:42.183920] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:43.536 [2024-12-07 16:40:42.184157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.536 BaseBdev4 00:14:43.537 16:40:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.537 [ 00:14:43.537 { 00:14:43.537 "name": "BaseBdev4", 00:14:43.537 "aliases": [ 00:14:43.537 "2a069b82-5a2f-4ec1-8a9a-339d40198c4b" 00:14:43.537 ], 00:14:43.537 "product_name": "Malloc disk", 00:14:43.537 "block_size": 512, 00:14:43.537 "num_blocks": 65536, 00:14:43.537 "uuid": "2a069b82-5a2f-4ec1-8a9a-339d40198c4b", 00:14:43.537 "assigned_rate_limits": { 00:14:43.537 "rw_ios_per_sec": 0, 00:14:43.537 
"rw_mbytes_per_sec": 0, 00:14:43.537 "r_mbytes_per_sec": 0, 00:14:43.537 "w_mbytes_per_sec": 0 00:14:43.537 }, 00:14:43.537 "claimed": true, 00:14:43.537 "claim_type": "exclusive_write", 00:14:43.537 "zoned": false, 00:14:43.537 "supported_io_types": { 00:14:43.537 "read": true, 00:14:43.537 "write": true, 00:14:43.537 "unmap": true, 00:14:43.537 "flush": true, 00:14:43.537 "reset": true, 00:14:43.537 "nvme_admin": false, 00:14:43.537 "nvme_io": false, 00:14:43.537 "nvme_io_md": false, 00:14:43.537 "write_zeroes": true, 00:14:43.537 "zcopy": true, 00:14:43.537 "get_zone_info": false, 00:14:43.537 "zone_management": false, 00:14:43.537 "zone_append": false, 00:14:43.537 "compare": false, 00:14:43.537 "compare_and_write": false, 00:14:43.537 "abort": true, 00:14:43.537 "seek_hole": false, 00:14:43.537 "seek_data": false, 00:14:43.537 "copy": true, 00:14:43.537 "nvme_iov_md": false 00:14:43.537 }, 00:14:43.537 "memory_domains": [ 00:14:43.537 { 00:14:43.537 "dma_device_id": "system", 00:14:43.537 "dma_device_type": 1 00:14:43.537 }, 00:14:43.537 { 00:14:43.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.537 "dma_device_type": 2 00:14:43.537 } 00:14:43.537 ], 00:14:43.537 "driver_specific": {} 00:14:43.537 } 00:14:43.537 ] 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.537 16:40:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.537 "name": "Existed_Raid", 00:14:43.537 "uuid": "1f303c7b-0859-4fc5-9367-2a36f3007890", 00:14:43.537 "strip_size_kb": 64, 00:14:43.537 "state": "online", 00:14:43.537 "raid_level": "raid5f", 00:14:43.537 "superblock": false, 00:14:43.537 "num_base_bdevs": 4, 00:14:43.537 "num_base_bdevs_discovered": 4, 00:14:43.537 "num_base_bdevs_operational": 4, 00:14:43.537 "base_bdevs_list": [ 00:14:43.537 { 00:14:43.537 "name": 
"BaseBdev1", 00:14:43.537 "uuid": "118d7216-002f-46f7-ad66-d2c1af71830f", 00:14:43.537 "is_configured": true, 00:14:43.537 "data_offset": 0, 00:14:43.537 "data_size": 65536 00:14:43.537 }, 00:14:43.537 { 00:14:43.537 "name": "BaseBdev2", 00:14:43.537 "uuid": "de76a67b-12b0-4dd8-b0f1-054b56fec9a9", 00:14:43.537 "is_configured": true, 00:14:43.537 "data_offset": 0, 00:14:43.537 "data_size": 65536 00:14:43.537 }, 00:14:43.537 { 00:14:43.537 "name": "BaseBdev3", 00:14:43.537 "uuid": "fb55d7e2-9873-420c-8a8f-ba2a1ae9e459", 00:14:43.537 "is_configured": true, 00:14:43.537 "data_offset": 0, 00:14:43.537 "data_size": 65536 00:14:43.537 }, 00:14:43.537 { 00:14:43.537 "name": "BaseBdev4", 00:14:43.537 "uuid": "2a069b82-5a2f-4ec1-8a9a-339d40198c4b", 00:14:43.537 "is_configured": true, 00:14:43.537 "data_offset": 0, 00:14:43.537 "data_size": 65536 00:14:43.537 } 00:14:43.537 ] 00:14:43.537 }' 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.537 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.107 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:44.107 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:44.107 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:44.107 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:44.107 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:44.107 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:44.107 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:44.107 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:14:44.107 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.107 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.107 [2024-12-07 16:40:42.714378] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.107 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.107 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:44.107 "name": "Existed_Raid", 00:14:44.107 "aliases": [ 00:14:44.107 "1f303c7b-0859-4fc5-9367-2a36f3007890" 00:14:44.107 ], 00:14:44.107 "product_name": "Raid Volume", 00:14:44.107 "block_size": 512, 00:14:44.107 "num_blocks": 196608, 00:14:44.107 "uuid": "1f303c7b-0859-4fc5-9367-2a36f3007890", 00:14:44.107 "assigned_rate_limits": { 00:14:44.107 "rw_ios_per_sec": 0, 00:14:44.107 "rw_mbytes_per_sec": 0, 00:14:44.107 "r_mbytes_per_sec": 0, 00:14:44.107 "w_mbytes_per_sec": 0 00:14:44.107 }, 00:14:44.107 "claimed": false, 00:14:44.107 "zoned": false, 00:14:44.107 "supported_io_types": { 00:14:44.107 "read": true, 00:14:44.107 "write": true, 00:14:44.107 "unmap": false, 00:14:44.107 "flush": false, 00:14:44.107 "reset": true, 00:14:44.107 "nvme_admin": false, 00:14:44.107 "nvme_io": false, 00:14:44.107 "nvme_io_md": false, 00:14:44.107 "write_zeroes": true, 00:14:44.107 "zcopy": false, 00:14:44.107 "get_zone_info": false, 00:14:44.107 "zone_management": false, 00:14:44.107 "zone_append": false, 00:14:44.107 "compare": false, 00:14:44.107 "compare_and_write": false, 00:14:44.107 "abort": false, 00:14:44.107 "seek_hole": false, 00:14:44.107 "seek_data": false, 00:14:44.107 "copy": false, 00:14:44.107 "nvme_iov_md": false 00:14:44.107 }, 00:14:44.107 "driver_specific": { 00:14:44.107 "raid": { 00:14:44.107 "uuid": "1f303c7b-0859-4fc5-9367-2a36f3007890", 00:14:44.107 "strip_size_kb": 64, 
00:14:44.107 "state": "online", 00:14:44.107 "raid_level": "raid5f", 00:14:44.107 "superblock": false, 00:14:44.107 "num_base_bdevs": 4, 00:14:44.107 "num_base_bdevs_discovered": 4, 00:14:44.107 "num_base_bdevs_operational": 4, 00:14:44.107 "base_bdevs_list": [ 00:14:44.108 { 00:14:44.108 "name": "BaseBdev1", 00:14:44.108 "uuid": "118d7216-002f-46f7-ad66-d2c1af71830f", 00:14:44.108 "is_configured": true, 00:14:44.108 "data_offset": 0, 00:14:44.108 "data_size": 65536 00:14:44.108 }, 00:14:44.108 { 00:14:44.108 "name": "BaseBdev2", 00:14:44.108 "uuid": "de76a67b-12b0-4dd8-b0f1-054b56fec9a9", 00:14:44.108 "is_configured": true, 00:14:44.108 "data_offset": 0, 00:14:44.108 "data_size": 65536 00:14:44.108 }, 00:14:44.108 { 00:14:44.108 "name": "BaseBdev3", 00:14:44.108 "uuid": "fb55d7e2-9873-420c-8a8f-ba2a1ae9e459", 00:14:44.108 "is_configured": true, 00:14:44.108 "data_offset": 0, 00:14:44.108 "data_size": 65536 00:14:44.108 }, 00:14:44.108 { 00:14:44.108 "name": "BaseBdev4", 00:14:44.108 "uuid": "2a069b82-5a2f-4ec1-8a9a-339d40198c4b", 00:14:44.108 "is_configured": true, 00:14:44.108 "data_offset": 0, 00:14:44.108 "data_size": 65536 00:14:44.108 } 00:14:44.108 ] 00:14:44.108 } 00:14:44.108 } 00:14:44.108 }' 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:44.108 BaseBdev2 00:14:44.108 BaseBdev3 00:14:44.108 BaseBdev4' 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.108 16:40:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.108 16:40:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.108 [2024-12-07 16:40:43.001681] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.368 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.368 16:40:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.369 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.369 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.369 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.369 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.369 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.369 "name": "Existed_Raid", 00:14:44.369 "uuid": "1f303c7b-0859-4fc5-9367-2a36f3007890", 00:14:44.369 "strip_size_kb": 64, 00:14:44.369 "state": "online", 00:14:44.369 "raid_level": "raid5f", 00:14:44.369 "superblock": false, 00:14:44.369 "num_base_bdevs": 4, 00:14:44.369 "num_base_bdevs_discovered": 3, 00:14:44.369 "num_base_bdevs_operational": 3, 00:14:44.369 "base_bdevs_list": [ 00:14:44.369 { 00:14:44.369 "name": null, 00:14:44.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.369 "is_configured": false, 00:14:44.369 "data_offset": 0, 00:14:44.369 "data_size": 65536 00:14:44.369 }, 00:14:44.369 { 00:14:44.369 "name": "BaseBdev2", 00:14:44.369 "uuid": "de76a67b-12b0-4dd8-b0f1-054b56fec9a9", 00:14:44.369 "is_configured": true, 00:14:44.369 "data_offset": 0, 00:14:44.369 "data_size": 65536 00:14:44.369 }, 00:14:44.369 { 00:14:44.369 "name": "BaseBdev3", 00:14:44.369 "uuid": "fb55d7e2-9873-420c-8a8f-ba2a1ae9e459", 00:14:44.369 "is_configured": true, 00:14:44.369 "data_offset": 0, 00:14:44.369 "data_size": 65536 00:14:44.369 }, 00:14:44.369 { 00:14:44.369 "name": "BaseBdev4", 00:14:44.369 "uuid": "2a069b82-5a2f-4ec1-8a9a-339d40198c4b", 00:14:44.369 "is_configured": true, 00:14:44.369 "data_offset": 0, 00:14:44.369 "data_size": 65536 00:14:44.369 } 00:14:44.369 ] 00:14:44.369 }' 00:14:44.369 
16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.369 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.629 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:44.629 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.629 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.629 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:44.629 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.629 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.629 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.629 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:44.629 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:44.629 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:44.629 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.629 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.629 [2024-12-07 16:40:43.509365] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:44.629 [2024-12-07 16:40:43.509478] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.889 [2024-12-07 16:40:43.530019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.889 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:44.889 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:44.889 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.889 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.889 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.890 [2024-12-07 16:40:43.589931] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.890 [2024-12-07 16:40:43.670320] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:44.890 [2024-12-07 16:40:43.670378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.890 16:40:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.890 BaseBdev2 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.890 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.890 [ 00:14:44.890 { 00:14:44.890 "name": "BaseBdev2", 00:14:45.151 "aliases": [ 00:14:45.151 "6961e917-693c-42ae-a27c-5b1949880e50" 00:14:45.151 ], 00:14:45.151 "product_name": "Malloc disk", 00:14:45.151 "block_size": 512, 00:14:45.151 "num_blocks": 65536, 00:14:45.151 "uuid": "6961e917-693c-42ae-a27c-5b1949880e50", 00:14:45.151 "assigned_rate_limits": { 00:14:45.151 "rw_ios_per_sec": 0, 00:14:45.151 "rw_mbytes_per_sec": 0, 00:14:45.151 "r_mbytes_per_sec": 0, 00:14:45.151 "w_mbytes_per_sec": 0 00:14:45.151 }, 00:14:45.151 "claimed": false, 00:14:45.151 "zoned": false, 00:14:45.151 "supported_io_types": { 00:14:45.151 "read": true, 00:14:45.151 "write": true, 00:14:45.151 "unmap": true, 00:14:45.151 "flush": true, 00:14:45.151 "reset": true, 00:14:45.151 "nvme_admin": false, 00:14:45.151 "nvme_io": false, 00:14:45.151 "nvme_io_md": false, 00:14:45.151 "write_zeroes": true, 00:14:45.151 "zcopy": true, 00:14:45.151 "get_zone_info": false, 00:14:45.151 "zone_management": false, 00:14:45.151 "zone_append": false, 00:14:45.151 "compare": false, 00:14:45.151 "compare_and_write": false, 00:14:45.151 "abort": true, 00:14:45.151 "seek_hole": false, 00:14:45.151 "seek_data": false, 00:14:45.151 "copy": true, 00:14:45.151 "nvme_iov_md": false 00:14:45.151 }, 00:14:45.151 "memory_domains": [ 00:14:45.151 { 00:14:45.151 "dma_device_id": "system", 00:14:45.151 "dma_device_type": 1 00:14:45.151 }, 
00:14:45.151 { 00:14:45.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.151 "dma_device_type": 2 00:14:45.151 } 00:14:45.151 ], 00:14:45.151 "driver_specific": {} 00:14:45.151 } 00:14:45.151 ] 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.151 BaseBdev3 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.151 [ 00:14:45.151 { 00:14:45.151 "name": "BaseBdev3", 00:14:45.151 "aliases": [ 00:14:45.151 "4541a03d-c5e7-44ad-91c7-8305f9c412d0" 00:14:45.151 ], 00:14:45.151 "product_name": "Malloc disk", 00:14:45.151 "block_size": 512, 00:14:45.151 "num_blocks": 65536, 00:14:45.151 "uuid": "4541a03d-c5e7-44ad-91c7-8305f9c412d0", 00:14:45.151 "assigned_rate_limits": { 00:14:45.151 "rw_ios_per_sec": 0, 00:14:45.151 "rw_mbytes_per_sec": 0, 00:14:45.151 "r_mbytes_per_sec": 0, 00:14:45.151 "w_mbytes_per_sec": 0 00:14:45.151 }, 00:14:45.151 "claimed": false, 00:14:45.151 "zoned": false, 00:14:45.151 "supported_io_types": { 00:14:45.151 "read": true, 00:14:45.151 "write": true, 00:14:45.151 "unmap": true, 00:14:45.151 "flush": true, 00:14:45.151 "reset": true, 00:14:45.151 "nvme_admin": false, 00:14:45.151 "nvme_io": false, 00:14:45.151 "nvme_io_md": false, 00:14:45.151 "write_zeroes": true, 00:14:45.151 "zcopy": true, 00:14:45.151 "get_zone_info": false, 00:14:45.151 "zone_management": false, 00:14:45.151 "zone_append": false, 00:14:45.151 "compare": false, 00:14:45.151 "compare_and_write": false, 00:14:45.151 "abort": true, 00:14:45.151 "seek_hole": false, 00:14:45.151 "seek_data": false, 00:14:45.151 "copy": true, 00:14:45.151 "nvme_iov_md": false 00:14:45.151 }, 00:14:45.151 "memory_domains": [ 00:14:45.151 { 00:14:45.151 "dma_device_id": "system", 00:14:45.151 
"dma_device_type": 1 00:14:45.151 }, 00:14:45.151 { 00:14:45.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.151 "dma_device_type": 2 00:14:45.151 } 00:14:45.151 ], 00:14:45.151 "driver_specific": {} 00:14:45.151 } 00:14:45.151 ] 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.151 BaseBdev4 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:45.151 16:40:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.151 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.151 [ 00:14:45.151 { 00:14:45.151 "name": "BaseBdev4", 00:14:45.151 "aliases": [ 00:14:45.151 "775ab1dc-c131-40fd-ab60-a94d9f3bcc07" 00:14:45.151 ], 00:14:45.151 "product_name": "Malloc disk", 00:14:45.151 "block_size": 512, 00:14:45.151 "num_blocks": 65536, 00:14:45.151 "uuid": "775ab1dc-c131-40fd-ab60-a94d9f3bcc07", 00:14:45.151 "assigned_rate_limits": { 00:14:45.151 "rw_ios_per_sec": 0, 00:14:45.151 "rw_mbytes_per_sec": 0, 00:14:45.151 "r_mbytes_per_sec": 0, 00:14:45.151 "w_mbytes_per_sec": 0 00:14:45.151 }, 00:14:45.151 "claimed": false, 00:14:45.151 "zoned": false, 00:14:45.151 "supported_io_types": { 00:14:45.151 "read": true, 00:14:45.151 "write": true, 00:14:45.151 "unmap": true, 00:14:45.151 "flush": true, 00:14:45.151 "reset": true, 00:14:45.151 "nvme_admin": false, 00:14:45.151 "nvme_io": false, 00:14:45.151 "nvme_io_md": false, 00:14:45.151 "write_zeroes": true, 00:14:45.151 "zcopy": true, 00:14:45.151 "get_zone_info": false, 00:14:45.151 "zone_management": false, 00:14:45.151 "zone_append": false, 00:14:45.151 "compare": false, 00:14:45.151 "compare_and_write": false, 00:14:45.151 "abort": true, 00:14:45.151 "seek_hole": false, 00:14:45.151 "seek_data": false, 00:14:45.151 "copy": true, 00:14:45.151 "nvme_iov_md": false 00:14:45.151 }, 00:14:45.151 "memory_domains": [ 00:14:45.151 { 00:14:45.151 
"dma_device_id": "system", 00:14:45.151 "dma_device_type": 1 00:14:45.152 }, 00:14:45.152 { 00:14:45.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.152 "dma_device_type": 2 00:14:45.152 } 00:14:45.152 ], 00:14:45.152 "driver_specific": {} 00:14:45.152 } 00:14:45.152 ] 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.152 [2024-12-07 16:40:43.922176] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.152 [2024-12-07 16:40:43.922295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.152 [2024-12-07 16:40:43.922338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.152 [2024-12-07 16:40:43.924483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.152 [2024-12-07 16:40:43.924570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.152 "name": "Existed_Raid", 00:14:45.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.152 "strip_size_kb": 64, 00:14:45.152 "state": "configuring", 00:14:45.152 "raid_level": "raid5f", 00:14:45.152 "superblock": false, 00:14:45.152 
"num_base_bdevs": 4, 00:14:45.152 "num_base_bdevs_discovered": 3, 00:14:45.152 "num_base_bdevs_operational": 4, 00:14:45.152 "base_bdevs_list": [ 00:14:45.152 { 00:14:45.152 "name": "BaseBdev1", 00:14:45.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.152 "is_configured": false, 00:14:45.152 "data_offset": 0, 00:14:45.152 "data_size": 0 00:14:45.152 }, 00:14:45.152 { 00:14:45.152 "name": "BaseBdev2", 00:14:45.152 "uuid": "6961e917-693c-42ae-a27c-5b1949880e50", 00:14:45.152 "is_configured": true, 00:14:45.152 "data_offset": 0, 00:14:45.152 "data_size": 65536 00:14:45.152 }, 00:14:45.152 { 00:14:45.152 "name": "BaseBdev3", 00:14:45.152 "uuid": "4541a03d-c5e7-44ad-91c7-8305f9c412d0", 00:14:45.152 "is_configured": true, 00:14:45.152 "data_offset": 0, 00:14:45.152 "data_size": 65536 00:14:45.152 }, 00:14:45.152 { 00:14:45.152 "name": "BaseBdev4", 00:14:45.152 "uuid": "775ab1dc-c131-40fd-ab60-a94d9f3bcc07", 00:14:45.152 "is_configured": true, 00:14:45.152 "data_offset": 0, 00:14:45.152 "data_size": 65536 00:14:45.152 } 00:14:45.152 ] 00:14:45.152 }' 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.152 16:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.721 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.722 [2024-12-07 16:40:44.389366] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.722 "name": "Existed_Raid", 00:14:45.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.722 "strip_size_kb": 64, 00:14:45.722 "state": "configuring", 00:14:45.722 "raid_level": "raid5f", 00:14:45.722 "superblock": false, 00:14:45.722 "num_base_bdevs": 4, 
00:14:45.722 "num_base_bdevs_discovered": 2, 00:14:45.722 "num_base_bdevs_operational": 4, 00:14:45.722 "base_bdevs_list": [ 00:14:45.722 { 00:14:45.722 "name": "BaseBdev1", 00:14:45.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.722 "is_configured": false, 00:14:45.722 "data_offset": 0, 00:14:45.722 "data_size": 0 00:14:45.722 }, 00:14:45.722 { 00:14:45.722 "name": null, 00:14:45.722 "uuid": "6961e917-693c-42ae-a27c-5b1949880e50", 00:14:45.722 "is_configured": false, 00:14:45.722 "data_offset": 0, 00:14:45.722 "data_size": 65536 00:14:45.722 }, 00:14:45.722 { 00:14:45.722 "name": "BaseBdev3", 00:14:45.722 "uuid": "4541a03d-c5e7-44ad-91c7-8305f9c412d0", 00:14:45.722 "is_configured": true, 00:14:45.722 "data_offset": 0, 00:14:45.722 "data_size": 65536 00:14:45.722 }, 00:14:45.722 { 00:14:45.722 "name": "BaseBdev4", 00:14:45.722 "uuid": "775ab1dc-c131-40fd-ab60-a94d9f3bcc07", 00:14:45.722 "is_configured": true, 00:14:45.722 "data_offset": 0, 00:14:45.722 "data_size": 65536 00:14:45.722 } 00:14:45.722 ] 00:14:45.722 }' 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.722 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.982 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:45.982 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.982 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.982 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.982 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.982 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:45.982 16:40:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.982 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.982 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.242 [2024-12-07 16:40:44.881269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.242 BaseBdev1 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.242 16:40:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.242 [ 00:14:46.242 { 00:14:46.242 "name": "BaseBdev1", 00:14:46.242 "aliases": [ 00:14:46.242 "bcf7b160-92fe-4feb-8dbc-189ed25ed114" 00:14:46.242 ], 00:14:46.242 "product_name": "Malloc disk", 00:14:46.242 "block_size": 512, 00:14:46.242 "num_blocks": 65536, 00:14:46.242 "uuid": "bcf7b160-92fe-4feb-8dbc-189ed25ed114", 00:14:46.242 "assigned_rate_limits": { 00:14:46.242 "rw_ios_per_sec": 0, 00:14:46.242 "rw_mbytes_per_sec": 0, 00:14:46.242 "r_mbytes_per_sec": 0, 00:14:46.242 "w_mbytes_per_sec": 0 00:14:46.242 }, 00:14:46.242 "claimed": true, 00:14:46.242 "claim_type": "exclusive_write", 00:14:46.242 "zoned": false, 00:14:46.242 "supported_io_types": { 00:14:46.242 "read": true, 00:14:46.242 "write": true, 00:14:46.242 "unmap": true, 00:14:46.242 "flush": true, 00:14:46.242 "reset": true, 00:14:46.242 "nvme_admin": false, 00:14:46.242 "nvme_io": false, 00:14:46.242 "nvme_io_md": false, 00:14:46.242 "write_zeroes": true, 00:14:46.242 "zcopy": true, 00:14:46.242 "get_zone_info": false, 00:14:46.242 "zone_management": false, 00:14:46.242 "zone_append": false, 00:14:46.242 "compare": false, 00:14:46.242 "compare_and_write": false, 00:14:46.242 "abort": true, 00:14:46.242 "seek_hole": false, 00:14:46.242 "seek_data": false, 00:14:46.242 "copy": true, 00:14:46.242 "nvme_iov_md": false 00:14:46.242 }, 00:14:46.242 "memory_domains": [ 00:14:46.242 { 00:14:46.242 "dma_device_id": "system", 00:14:46.242 "dma_device_type": 1 00:14:46.242 }, 00:14:46.242 { 00:14:46.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.242 "dma_device_type": 2 00:14:46.242 } 00:14:46.242 ], 00:14:46.242 "driver_specific": {} 00:14:46.242 } 00:14:46.242 ] 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:46.242 16:40:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.242 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.243 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.243 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.243 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.243 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.243 "name": "Existed_Raid", 00:14:46.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.243 "strip_size_kb": 64, 00:14:46.243 "state": 
"configuring", 00:14:46.243 "raid_level": "raid5f", 00:14:46.243 "superblock": false, 00:14:46.243 "num_base_bdevs": 4, 00:14:46.243 "num_base_bdevs_discovered": 3, 00:14:46.243 "num_base_bdevs_operational": 4, 00:14:46.243 "base_bdevs_list": [ 00:14:46.243 { 00:14:46.243 "name": "BaseBdev1", 00:14:46.243 "uuid": "bcf7b160-92fe-4feb-8dbc-189ed25ed114", 00:14:46.243 "is_configured": true, 00:14:46.243 "data_offset": 0, 00:14:46.243 "data_size": 65536 00:14:46.243 }, 00:14:46.243 { 00:14:46.243 "name": null, 00:14:46.243 "uuid": "6961e917-693c-42ae-a27c-5b1949880e50", 00:14:46.243 "is_configured": false, 00:14:46.243 "data_offset": 0, 00:14:46.243 "data_size": 65536 00:14:46.243 }, 00:14:46.243 { 00:14:46.243 "name": "BaseBdev3", 00:14:46.243 "uuid": "4541a03d-c5e7-44ad-91c7-8305f9c412d0", 00:14:46.243 "is_configured": true, 00:14:46.243 "data_offset": 0, 00:14:46.243 "data_size": 65536 00:14:46.243 }, 00:14:46.243 { 00:14:46.243 "name": "BaseBdev4", 00:14:46.243 "uuid": "775ab1dc-c131-40fd-ab60-a94d9f3bcc07", 00:14:46.243 "is_configured": true, 00:14:46.243 "data_offset": 0, 00:14:46.243 "data_size": 65536 00:14:46.243 } 00:14:46.243 ] 00:14:46.243 }' 00:14:46.243 16:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.243 16:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.501 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:46.502 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.502 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.502 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.502 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.760 16:40:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.760 [2024-12-07 16:40:45.416383] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.760 16:40:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.760 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.760 "name": "Existed_Raid", 00:14:46.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.760 "strip_size_kb": 64, 00:14:46.760 "state": "configuring", 00:14:46.760 "raid_level": "raid5f", 00:14:46.760 "superblock": false, 00:14:46.760 "num_base_bdevs": 4, 00:14:46.760 "num_base_bdevs_discovered": 2, 00:14:46.760 "num_base_bdevs_operational": 4, 00:14:46.760 "base_bdevs_list": [ 00:14:46.760 { 00:14:46.760 "name": "BaseBdev1", 00:14:46.760 "uuid": "bcf7b160-92fe-4feb-8dbc-189ed25ed114", 00:14:46.760 "is_configured": true, 00:14:46.760 "data_offset": 0, 00:14:46.760 "data_size": 65536 00:14:46.760 }, 00:14:46.760 { 00:14:46.760 "name": null, 00:14:46.760 "uuid": "6961e917-693c-42ae-a27c-5b1949880e50", 00:14:46.760 "is_configured": false, 00:14:46.760 "data_offset": 0, 00:14:46.760 "data_size": 65536 00:14:46.760 }, 00:14:46.760 { 00:14:46.760 "name": null, 00:14:46.760 "uuid": "4541a03d-c5e7-44ad-91c7-8305f9c412d0", 00:14:46.760 "is_configured": false, 00:14:46.760 "data_offset": 0, 00:14:46.760 "data_size": 65536 00:14:46.760 }, 00:14:46.760 { 00:14:46.760 "name": "BaseBdev4", 00:14:46.760 "uuid": "775ab1dc-c131-40fd-ab60-a94d9f3bcc07", 00:14:46.760 "is_configured": true, 00:14:46.761 "data_offset": 0, 00:14:46.761 "data_size": 65536 00:14:46.761 } 00:14:46.761 ] 00:14:46.761 }' 00:14:46.761 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.761 16:40:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.020 [2024-12-07 16:40:45.895591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.020 
16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.020 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.280 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.280 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.280 "name": "Existed_Raid", 00:14:47.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.280 "strip_size_kb": 64, 00:14:47.280 "state": "configuring", 00:14:47.280 "raid_level": "raid5f", 00:14:47.280 "superblock": false, 00:14:47.280 "num_base_bdevs": 4, 00:14:47.280 "num_base_bdevs_discovered": 3, 00:14:47.280 "num_base_bdevs_operational": 4, 00:14:47.280 "base_bdevs_list": [ 00:14:47.280 { 00:14:47.280 "name": "BaseBdev1", 00:14:47.280 "uuid": "bcf7b160-92fe-4feb-8dbc-189ed25ed114", 00:14:47.280 "is_configured": true, 00:14:47.280 "data_offset": 0, 00:14:47.280 "data_size": 65536 00:14:47.280 }, 00:14:47.280 { 00:14:47.280 "name": null, 00:14:47.280 "uuid": "6961e917-693c-42ae-a27c-5b1949880e50", 00:14:47.280 "is_configured": 
false, 00:14:47.280 "data_offset": 0, 00:14:47.280 "data_size": 65536 00:14:47.280 }, 00:14:47.280 { 00:14:47.280 "name": "BaseBdev3", 00:14:47.280 "uuid": "4541a03d-c5e7-44ad-91c7-8305f9c412d0", 00:14:47.280 "is_configured": true, 00:14:47.280 "data_offset": 0, 00:14:47.280 "data_size": 65536 00:14:47.280 }, 00:14:47.280 { 00:14:47.280 "name": "BaseBdev4", 00:14:47.280 "uuid": "775ab1dc-c131-40fd-ab60-a94d9f3bcc07", 00:14:47.280 "is_configured": true, 00:14:47.280 "data_offset": 0, 00:14:47.280 "data_size": 65536 00:14:47.280 } 00:14:47.280 ] 00:14:47.280 }' 00:14:47.280 16:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.280 16:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.540 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.540 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:47.540 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.540 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.540 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.540 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:47.540 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:47.540 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.540 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.540 [2024-12-07 16:40:46.395557] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:47.540 16:40:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.540 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:47.540 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.540 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.540 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.541 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.541 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.541 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.541 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.541 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.541 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.541 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.541 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.541 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.541 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.801 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.801 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.801 "name": "Existed_Raid", 00:14:47.801 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:47.801 "strip_size_kb": 64, 00:14:47.801 "state": "configuring", 00:14:47.801 "raid_level": "raid5f", 00:14:47.801 "superblock": false, 00:14:47.801 "num_base_bdevs": 4, 00:14:47.801 "num_base_bdevs_discovered": 2, 00:14:47.801 "num_base_bdevs_operational": 4, 00:14:47.801 "base_bdevs_list": [ 00:14:47.801 { 00:14:47.801 "name": null, 00:14:47.801 "uuid": "bcf7b160-92fe-4feb-8dbc-189ed25ed114", 00:14:47.801 "is_configured": false, 00:14:47.801 "data_offset": 0, 00:14:47.801 "data_size": 65536 00:14:47.801 }, 00:14:47.801 { 00:14:47.801 "name": null, 00:14:47.801 "uuid": "6961e917-693c-42ae-a27c-5b1949880e50", 00:14:47.801 "is_configured": false, 00:14:47.801 "data_offset": 0, 00:14:47.801 "data_size": 65536 00:14:47.801 }, 00:14:47.801 { 00:14:47.801 "name": "BaseBdev3", 00:14:47.801 "uuid": "4541a03d-c5e7-44ad-91c7-8305f9c412d0", 00:14:47.801 "is_configured": true, 00:14:47.801 "data_offset": 0, 00:14:47.801 "data_size": 65536 00:14:47.801 }, 00:14:47.801 { 00:14:47.801 "name": "BaseBdev4", 00:14:47.801 "uuid": "775ab1dc-c131-40fd-ab60-a94d9f3bcc07", 00:14:47.801 "is_configured": true, 00:14:47.801 "data_offset": 0, 00:14:47.801 "data_size": 65536 00:14:47.801 } 00:14:47.801 ] 00:14:47.801 }' 00:14:47.801 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.801 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.061 [2024-12-07 16:40:46.930272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.061 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.320 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.320 16:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.320 "name": "Existed_Raid", 00:14:48.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.320 "strip_size_kb": 64, 00:14:48.320 "state": "configuring", 00:14:48.320 "raid_level": "raid5f", 00:14:48.320 "superblock": false, 00:14:48.320 "num_base_bdevs": 4, 00:14:48.320 "num_base_bdevs_discovered": 3, 00:14:48.320 "num_base_bdevs_operational": 4, 00:14:48.320 "base_bdevs_list": [ 00:14:48.320 { 00:14:48.320 "name": null, 00:14:48.320 "uuid": "bcf7b160-92fe-4feb-8dbc-189ed25ed114", 00:14:48.320 "is_configured": false, 00:14:48.320 "data_offset": 0, 00:14:48.320 "data_size": 65536 00:14:48.320 }, 00:14:48.320 { 00:14:48.320 "name": "BaseBdev2", 00:14:48.321 "uuid": "6961e917-693c-42ae-a27c-5b1949880e50", 00:14:48.321 "is_configured": true, 00:14:48.321 "data_offset": 0, 00:14:48.321 "data_size": 65536 00:14:48.321 }, 00:14:48.321 { 00:14:48.321 "name": "BaseBdev3", 00:14:48.321 "uuid": "4541a03d-c5e7-44ad-91c7-8305f9c412d0", 00:14:48.321 "is_configured": true, 00:14:48.321 "data_offset": 0, 00:14:48.321 "data_size": 65536 00:14:48.321 }, 00:14:48.321 { 00:14:48.321 "name": "BaseBdev4", 00:14:48.321 "uuid": "775ab1dc-c131-40fd-ab60-a94d9f3bcc07", 00:14:48.321 "is_configured": true, 00:14:48.321 "data_offset": 0, 00:14:48.321 "data_size": 65536 00:14:48.321 } 00:14:48.321 ] 00:14:48.321 }' 00:14:48.321 16:40:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.321 16:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bcf7b160-92fe-4feb-8dbc-189ed25ed114 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.581 [2024-12-07 16:40:47.470103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:48.581 [2024-12-07 
16:40:47.470161] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:48.581 [2024-12-07 16:40:47.470169] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:48.581 [2024-12-07 16:40:47.470492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:48.581 [2024-12-07 16:40:47.470966] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:48.581 [2024-12-07 16:40:47.470986] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:48.581 [2024-12-07 16:40:47.471200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.581 NewBaseBdev 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.581 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.841 [ 00:14:48.841 { 00:14:48.841 "name": "NewBaseBdev", 00:14:48.841 "aliases": [ 00:14:48.841 "bcf7b160-92fe-4feb-8dbc-189ed25ed114" 00:14:48.841 ], 00:14:48.841 "product_name": "Malloc disk", 00:14:48.841 "block_size": 512, 00:14:48.841 "num_blocks": 65536, 00:14:48.841 "uuid": "bcf7b160-92fe-4feb-8dbc-189ed25ed114", 00:14:48.841 "assigned_rate_limits": { 00:14:48.841 "rw_ios_per_sec": 0, 00:14:48.841 "rw_mbytes_per_sec": 0, 00:14:48.841 "r_mbytes_per_sec": 0, 00:14:48.841 "w_mbytes_per_sec": 0 00:14:48.841 }, 00:14:48.841 "claimed": true, 00:14:48.841 "claim_type": "exclusive_write", 00:14:48.841 "zoned": false, 00:14:48.841 "supported_io_types": { 00:14:48.841 "read": true, 00:14:48.841 "write": true, 00:14:48.841 "unmap": true, 00:14:48.841 "flush": true, 00:14:48.841 "reset": true, 00:14:48.841 "nvme_admin": false, 00:14:48.841 "nvme_io": false, 00:14:48.841 "nvme_io_md": false, 00:14:48.841 "write_zeroes": true, 00:14:48.841 "zcopy": true, 00:14:48.841 "get_zone_info": false, 00:14:48.841 "zone_management": false, 00:14:48.841 "zone_append": false, 00:14:48.841 "compare": false, 00:14:48.841 "compare_and_write": false, 00:14:48.841 "abort": true, 00:14:48.841 "seek_hole": false, 00:14:48.841 "seek_data": false, 00:14:48.841 "copy": true, 00:14:48.841 "nvme_iov_md": false 00:14:48.841 }, 00:14:48.841 "memory_domains": [ 00:14:48.841 { 00:14:48.841 "dma_device_id": "system", 00:14:48.841 "dma_device_type": 1 00:14:48.841 }, 00:14:48.841 { 00:14:48.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.841 "dma_device_type": 2 00:14:48.841 } 
00:14:48.841 ], 00:14:48.841 "driver_specific": {} 00:14:48.841 } 00:14:48.841 ] 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.841 16:40:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.842 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.842 "name": "Existed_Raid", 00:14:48.842 "uuid": "b6223abb-2f59-40de-b91c-1c2bdabbeee0", 00:14:48.842 "strip_size_kb": 64, 00:14:48.842 "state": "online", 00:14:48.842 "raid_level": "raid5f", 00:14:48.842 "superblock": false, 00:14:48.842 "num_base_bdevs": 4, 00:14:48.842 "num_base_bdevs_discovered": 4, 00:14:48.842 "num_base_bdevs_operational": 4, 00:14:48.842 "base_bdevs_list": [ 00:14:48.842 { 00:14:48.842 "name": "NewBaseBdev", 00:14:48.842 "uuid": "bcf7b160-92fe-4feb-8dbc-189ed25ed114", 00:14:48.842 "is_configured": true, 00:14:48.842 "data_offset": 0, 00:14:48.842 "data_size": 65536 00:14:48.842 }, 00:14:48.842 { 00:14:48.842 "name": "BaseBdev2", 00:14:48.842 "uuid": "6961e917-693c-42ae-a27c-5b1949880e50", 00:14:48.842 "is_configured": true, 00:14:48.842 "data_offset": 0, 00:14:48.842 "data_size": 65536 00:14:48.842 }, 00:14:48.842 { 00:14:48.842 "name": "BaseBdev3", 00:14:48.842 "uuid": "4541a03d-c5e7-44ad-91c7-8305f9c412d0", 00:14:48.842 "is_configured": true, 00:14:48.842 "data_offset": 0, 00:14:48.842 "data_size": 65536 00:14:48.842 }, 00:14:48.842 { 00:14:48.842 "name": "BaseBdev4", 00:14:48.842 "uuid": "775ab1dc-c131-40fd-ab60-a94d9f3bcc07", 00:14:48.842 "is_configured": true, 00:14:48.842 "data_offset": 0, 00:14:48.842 "data_size": 65536 00:14:48.842 } 00:14:48.842 ] 00:14:48.842 }' 00:14:48.842 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.842 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.102 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:49.102 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:49.102 16:40:47 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:49.102 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:49.102 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:49.102 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:49.102 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:49.102 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:49.102 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.102 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.102 [2024-12-07 16:40:47.885612] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.102 16:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.102 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:49.102 "name": "Existed_Raid", 00:14:49.102 "aliases": [ 00:14:49.102 "b6223abb-2f59-40de-b91c-1c2bdabbeee0" 00:14:49.102 ], 00:14:49.102 "product_name": "Raid Volume", 00:14:49.102 "block_size": 512, 00:14:49.102 "num_blocks": 196608, 00:14:49.102 "uuid": "b6223abb-2f59-40de-b91c-1c2bdabbeee0", 00:14:49.102 "assigned_rate_limits": { 00:14:49.102 "rw_ios_per_sec": 0, 00:14:49.102 "rw_mbytes_per_sec": 0, 00:14:49.102 "r_mbytes_per_sec": 0, 00:14:49.102 "w_mbytes_per_sec": 0 00:14:49.102 }, 00:14:49.102 "claimed": false, 00:14:49.102 "zoned": false, 00:14:49.102 "supported_io_types": { 00:14:49.102 "read": true, 00:14:49.102 "write": true, 00:14:49.102 "unmap": false, 00:14:49.102 "flush": false, 00:14:49.102 "reset": true, 00:14:49.102 "nvme_admin": false, 00:14:49.102 "nvme_io": false, 00:14:49.102 "nvme_io_md": 
false, 00:14:49.102 "write_zeroes": true, 00:14:49.102 "zcopy": false, 00:14:49.102 "get_zone_info": false, 00:14:49.102 "zone_management": false, 00:14:49.102 "zone_append": false, 00:14:49.102 "compare": false, 00:14:49.102 "compare_and_write": false, 00:14:49.102 "abort": false, 00:14:49.102 "seek_hole": false, 00:14:49.102 "seek_data": false, 00:14:49.102 "copy": false, 00:14:49.102 "nvme_iov_md": false 00:14:49.102 }, 00:14:49.102 "driver_specific": { 00:14:49.102 "raid": { 00:14:49.102 "uuid": "b6223abb-2f59-40de-b91c-1c2bdabbeee0", 00:14:49.102 "strip_size_kb": 64, 00:14:49.102 "state": "online", 00:14:49.102 "raid_level": "raid5f", 00:14:49.102 "superblock": false, 00:14:49.102 "num_base_bdevs": 4, 00:14:49.102 "num_base_bdevs_discovered": 4, 00:14:49.102 "num_base_bdevs_operational": 4, 00:14:49.102 "base_bdevs_list": [ 00:14:49.102 { 00:14:49.102 "name": "NewBaseBdev", 00:14:49.102 "uuid": "bcf7b160-92fe-4feb-8dbc-189ed25ed114", 00:14:49.102 "is_configured": true, 00:14:49.102 "data_offset": 0, 00:14:49.102 "data_size": 65536 00:14:49.102 }, 00:14:49.102 { 00:14:49.102 "name": "BaseBdev2", 00:14:49.102 "uuid": "6961e917-693c-42ae-a27c-5b1949880e50", 00:14:49.102 "is_configured": true, 00:14:49.102 "data_offset": 0, 00:14:49.102 "data_size": 65536 00:14:49.102 }, 00:14:49.102 { 00:14:49.102 "name": "BaseBdev3", 00:14:49.102 "uuid": "4541a03d-c5e7-44ad-91c7-8305f9c412d0", 00:14:49.102 "is_configured": true, 00:14:49.102 "data_offset": 0, 00:14:49.102 "data_size": 65536 00:14:49.102 }, 00:14:49.102 { 00:14:49.102 "name": "BaseBdev4", 00:14:49.102 "uuid": "775ab1dc-c131-40fd-ab60-a94d9f3bcc07", 00:14:49.102 "is_configured": true, 00:14:49.102 "data_offset": 0, 00:14:49.102 "data_size": 65536 00:14:49.102 } 00:14:49.102 ] 00:14:49.102 } 00:14:49.102 } 00:14:49.102 }' 00:14:49.102 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:49.102 16:40:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:49.102 BaseBdev2 00:14:49.102 BaseBdev3 00:14:49.102 BaseBdev4' 00:14:49.102 16:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.363 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.363 [2024-12-07 16:40:48.220810] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.363 [2024-12-07 16:40:48.220840] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:49.363 [2024-12-07 16:40:48.220919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:49.363 [2024-12-07 16:40:48.221200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:49.363 [2024-12-07 16:40:48.221212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:49.364 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.364 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93571 00:14:49.364 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 93571 ']' 00:14:49.364 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93571 00:14:49.364 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:49.364 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:49.364 16:40:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93571 00:14:49.624 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:49.624 killing process with pid 93571 00:14:49.624 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:49.624 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93571' 00:14:49.624 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 93571 00:14:49.624 [2024-12-07 16:40:48.273182] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:49.624 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 93571 00:14:49.624 [2024-12-07 16:40:48.348770] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.884 16:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:49.884 00:14:49.884 real 0m9.845s 00:14:49.884 user 0m16.466s 00:14:49.884 sys 0m2.276s 00:14:49.884 ************************************ 00:14:49.884 END TEST raid5f_state_function_test 00:14:49.884 ************************************ 00:14:49.884 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:49.884 16:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.884 16:40:48 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:49.884 16:40:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:49.884 16:40:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:49.884 16:40:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.146 ************************************ 00:14:50.146 START TEST 
raid5f_state_function_test_sb 00:14:50.146 ************************************ 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:50.146 
16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=94226 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94226' 00:14:50.146 Process raid pid: 94226 00:14:50.146 16:40:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 94226 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 94226 ']' 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.146 16:40:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.146 [2024-12-07 16:40:48.887365] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:50.146 [2024-12-07 16:40:48.888015] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.407 [2024-12-07 16:40:49.050212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.407 [2024-12-07 16:40:49.122693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.407 [2024-12-07 16:40:49.198433] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.407 [2024-12-07 16:40:49.198478] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.977 16:40:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.977 16:40:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:50.977 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.978 [2024-12-07 16:40:49.705529] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.978 [2024-12-07 16:40:49.705588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.978 [2024-12-07 16:40:49.705601] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.978 [2024-12-07 16:40:49.705610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.978 [2024-12-07 16:40:49.705616] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:14:50.978 [2024-12-07 16:40:49.705630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.978 [2024-12-07 16:40:49.705636] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:50.978 [2024-12-07 16:40:49.705646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.978 "name": "Existed_Raid", 00:14:50.978 "uuid": "5798084e-77b6-4b4d-97f8-39e2c0fc31cb", 00:14:50.978 "strip_size_kb": 64, 00:14:50.978 "state": "configuring", 00:14:50.978 "raid_level": "raid5f", 00:14:50.978 "superblock": true, 00:14:50.978 "num_base_bdevs": 4, 00:14:50.978 "num_base_bdevs_discovered": 0, 00:14:50.978 "num_base_bdevs_operational": 4, 00:14:50.978 "base_bdevs_list": [ 00:14:50.978 { 00:14:50.978 "name": "BaseBdev1", 00:14:50.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.978 "is_configured": false, 00:14:50.978 "data_offset": 0, 00:14:50.978 "data_size": 0 00:14:50.978 }, 00:14:50.978 { 00:14:50.978 "name": "BaseBdev2", 00:14:50.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.978 "is_configured": false, 00:14:50.978 "data_offset": 0, 00:14:50.978 "data_size": 0 00:14:50.978 }, 00:14:50.978 { 00:14:50.978 "name": "BaseBdev3", 00:14:50.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.978 "is_configured": false, 00:14:50.978 "data_offset": 0, 00:14:50.978 "data_size": 0 00:14:50.978 }, 00:14:50.978 { 00:14:50.978 "name": "BaseBdev4", 00:14:50.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.978 "is_configured": false, 00:14:50.978 "data_offset": 0, 00:14:50.978 "data_size": 0 00:14:50.978 } 00:14:50.978 ] 00:14:50.978 }' 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.978 16:40:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.548 [2024-12-07 16:40:50.160610] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.548 [2024-12-07 16:40:50.160709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.548 [2024-12-07 16:40:50.172654] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:51.548 [2024-12-07 16:40:50.172734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:51.548 [2024-12-07 16:40:50.172761] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.548 [2024-12-07 16:40:50.172784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.548 [2024-12-07 16:40:50.172801] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:51.548 [2024-12-07 16:40:50.172822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:51.548 [2024-12-07 16:40:50.172839] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:51.548 [2024-12-07 16:40:50.172859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.548 [2024-12-07 16:40:50.199512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.548 BaseBdev1 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:51.548 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.549 [ 00:14:51.549 { 00:14:51.549 "name": "BaseBdev1", 00:14:51.549 "aliases": [ 00:14:51.549 "16cba635-c541-413e-be6d-3172be866e7b" 00:14:51.549 ], 00:14:51.549 "product_name": "Malloc disk", 00:14:51.549 "block_size": 512, 00:14:51.549 "num_blocks": 65536, 00:14:51.549 "uuid": "16cba635-c541-413e-be6d-3172be866e7b", 00:14:51.549 "assigned_rate_limits": { 00:14:51.549 "rw_ios_per_sec": 0, 00:14:51.549 "rw_mbytes_per_sec": 0, 00:14:51.549 "r_mbytes_per_sec": 0, 00:14:51.549 "w_mbytes_per_sec": 0 00:14:51.549 }, 00:14:51.549 "claimed": true, 00:14:51.549 "claim_type": "exclusive_write", 00:14:51.549 "zoned": false, 00:14:51.549 "supported_io_types": { 00:14:51.549 "read": true, 00:14:51.549 "write": true, 00:14:51.549 "unmap": true, 00:14:51.549 "flush": true, 00:14:51.549 "reset": true, 00:14:51.549 "nvme_admin": false, 00:14:51.549 "nvme_io": false, 00:14:51.549 "nvme_io_md": false, 00:14:51.549 "write_zeroes": true, 00:14:51.549 "zcopy": true, 00:14:51.549 "get_zone_info": false, 00:14:51.549 "zone_management": false, 00:14:51.549 "zone_append": false, 00:14:51.549 "compare": false, 00:14:51.549 "compare_and_write": false, 00:14:51.549 "abort": true, 00:14:51.549 "seek_hole": false, 00:14:51.549 "seek_data": false, 00:14:51.549 "copy": true, 00:14:51.549 "nvme_iov_md": false 00:14:51.549 }, 00:14:51.549 "memory_domains": [ 00:14:51.549 { 00:14:51.549 "dma_device_id": "system", 00:14:51.549 "dma_device_type": 1 00:14:51.549 }, 00:14:51.549 { 00:14:51.549 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:51.549 "dma_device_type": 2 00:14:51.549 } 00:14:51.549 ], 00:14:51.549 "driver_specific": {} 00:14:51.549 } 00:14:51.549 ] 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.549 16:40:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.549 "name": "Existed_Raid", 00:14:51.549 "uuid": "686514f7-cfd8-4893-9b4d-b61a3859717c", 00:14:51.549 "strip_size_kb": 64, 00:14:51.549 "state": "configuring", 00:14:51.549 "raid_level": "raid5f", 00:14:51.549 "superblock": true, 00:14:51.549 "num_base_bdevs": 4, 00:14:51.549 "num_base_bdevs_discovered": 1, 00:14:51.549 "num_base_bdevs_operational": 4, 00:14:51.549 "base_bdevs_list": [ 00:14:51.549 { 00:14:51.549 "name": "BaseBdev1", 00:14:51.549 "uuid": "16cba635-c541-413e-be6d-3172be866e7b", 00:14:51.549 "is_configured": true, 00:14:51.549 "data_offset": 2048, 00:14:51.549 "data_size": 63488 00:14:51.549 }, 00:14:51.549 { 00:14:51.549 "name": "BaseBdev2", 00:14:51.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.549 "is_configured": false, 00:14:51.549 "data_offset": 0, 00:14:51.549 "data_size": 0 00:14:51.549 }, 00:14:51.549 { 00:14:51.549 "name": "BaseBdev3", 00:14:51.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.549 "is_configured": false, 00:14:51.549 "data_offset": 0, 00:14:51.549 "data_size": 0 00:14:51.549 }, 00:14:51.549 { 00:14:51.549 "name": "BaseBdev4", 00:14:51.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.549 "is_configured": false, 00:14:51.549 "data_offset": 0, 00:14:51.549 "data_size": 0 00:14:51.549 } 00:14:51.549 ] 00:14:51.549 }' 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.549 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:51.808 16:40:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.808 [2024-12-07 16:40:50.655493] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.808 [2024-12-07 16:40:50.655581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.808 [2024-12-07 16:40:50.667550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.808 [2024-12-07 16:40:50.669688] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.808 [2024-12-07 16:40:50.669731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.808 [2024-12-07 16:40:50.669741] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:51.808 [2024-12-07 16:40:50.669749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:51.808 [2024-12-07 16:40:50.669755] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:51.808 [2024-12-07 16:40:50.669763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.808 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.808 16:40:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.068 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.068 "name": "Existed_Raid", 00:14:52.068 "uuid": "f8d3014e-a99e-43fd-a57d-b02a86ea125b", 00:14:52.068 "strip_size_kb": 64, 00:14:52.068 "state": "configuring", 00:14:52.068 "raid_level": "raid5f", 00:14:52.068 "superblock": true, 00:14:52.068 "num_base_bdevs": 4, 00:14:52.068 "num_base_bdevs_discovered": 1, 00:14:52.068 "num_base_bdevs_operational": 4, 00:14:52.068 "base_bdevs_list": [ 00:14:52.068 { 00:14:52.068 "name": "BaseBdev1", 00:14:52.068 "uuid": "16cba635-c541-413e-be6d-3172be866e7b", 00:14:52.068 "is_configured": true, 00:14:52.068 "data_offset": 2048, 00:14:52.068 "data_size": 63488 00:14:52.068 }, 00:14:52.068 { 00:14:52.068 "name": "BaseBdev2", 00:14:52.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.068 "is_configured": false, 00:14:52.068 "data_offset": 0, 00:14:52.068 "data_size": 0 00:14:52.068 }, 00:14:52.068 { 00:14:52.068 "name": "BaseBdev3", 00:14:52.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.068 "is_configured": false, 00:14:52.068 "data_offset": 0, 00:14:52.068 "data_size": 0 00:14:52.068 }, 00:14:52.068 { 00:14:52.068 "name": "BaseBdev4", 00:14:52.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.068 "is_configured": false, 00:14:52.068 "data_offset": 0, 00:14:52.068 "data_size": 0 00:14:52.068 } 00:14:52.068 ] 00:14:52.068 }' 00:14:52.068 16:40:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.068 16:40:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.329 [2024-12-07 16:40:51.172306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.329 BaseBdev2 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.329 [ 00:14:52.329 { 00:14:52.329 "name": "BaseBdev2", 00:14:52.329 "aliases": [ 00:14:52.329 
"a44d074a-4c8c-4f13-b051-772ba59ce9b7" 00:14:52.329 ], 00:14:52.329 "product_name": "Malloc disk", 00:14:52.329 "block_size": 512, 00:14:52.329 "num_blocks": 65536, 00:14:52.329 "uuid": "a44d074a-4c8c-4f13-b051-772ba59ce9b7", 00:14:52.329 "assigned_rate_limits": { 00:14:52.329 "rw_ios_per_sec": 0, 00:14:52.329 "rw_mbytes_per_sec": 0, 00:14:52.329 "r_mbytes_per_sec": 0, 00:14:52.329 "w_mbytes_per_sec": 0 00:14:52.329 }, 00:14:52.329 "claimed": true, 00:14:52.329 "claim_type": "exclusive_write", 00:14:52.329 "zoned": false, 00:14:52.329 "supported_io_types": { 00:14:52.329 "read": true, 00:14:52.329 "write": true, 00:14:52.329 "unmap": true, 00:14:52.329 "flush": true, 00:14:52.329 "reset": true, 00:14:52.329 "nvme_admin": false, 00:14:52.329 "nvme_io": false, 00:14:52.329 "nvme_io_md": false, 00:14:52.329 "write_zeroes": true, 00:14:52.329 "zcopy": true, 00:14:52.329 "get_zone_info": false, 00:14:52.329 "zone_management": false, 00:14:52.329 "zone_append": false, 00:14:52.329 "compare": false, 00:14:52.329 "compare_and_write": false, 00:14:52.329 "abort": true, 00:14:52.329 "seek_hole": false, 00:14:52.329 "seek_data": false, 00:14:52.329 "copy": true, 00:14:52.329 "nvme_iov_md": false 00:14:52.329 }, 00:14:52.329 "memory_domains": [ 00:14:52.329 { 00:14:52.329 "dma_device_id": "system", 00:14:52.329 "dma_device_type": 1 00:14:52.329 }, 00:14:52.329 { 00:14:52.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.329 "dma_device_type": 2 00:14:52.329 } 00:14:52.329 ], 00:14:52.329 "driver_specific": {} 00:14:52.329 } 00:14:52.329 ] 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.329 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.589 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.589 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.589 "name": "Existed_Raid", 00:14:52.589 "uuid": 
"f8d3014e-a99e-43fd-a57d-b02a86ea125b", 00:14:52.589 "strip_size_kb": 64, 00:14:52.589 "state": "configuring", 00:14:52.589 "raid_level": "raid5f", 00:14:52.589 "superblock": true, 00:14:52.589 "num_base_bdevs": 4, 00:14:52.589 "num_base_bdevs_discovered": 2, 00:14:52.589 "num_base_bdevs_operational": 4, 00:14:52.589 "base_bdevs_list": [ 00:14:52.589 { 00:14:52.589 "name": "BaseBdev1", 00:14:52.589 "uuid": "16cba635-c541-413e-be6d-3172be866e7b", 00:14:52.589 "is_configured": true, 00:14:52.589 "data_offset": 2048, 00:14:52.589 "data_size": 63488 00:14:52.589 }, 00:14:52.589 { 00:14:52.589 "name": "BaseBdev2", 00:14:52.589 "uuid": "a44d074a-4c8c-4f13-b051-772ba59ce9b7", 00:14:52.589 "is_configured": true, 00:14:52.589 "data_offset": 2048, 00:14:52.589 "data_size": 63488 00:14:52.589 }, 00:14:52.589 { 00:14:52.589 "name": "BaseBdev3", 00:14:52.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.589 "is_configured": false, 00:14:52.589 "data_offset": 0, 00:14:52.589 "data_size": 0 00:14:52.589 }, 00:14:52.589 { 00:14:52.589 "name": "BaseBdev4", 00:14:52.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.589 "is_configured": false, 00:14:52.589 "data_offset": 0, 00:14:52.589 "data_size": 0 00:14:52.589 } 00:14:52.589 ] 00:14:52.589 }' 00:14:52.589 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.589 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.849 [2024-12-07 16:40:51.688283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.849 BaseBdev3 
00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.849 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.849 [ 00:14:52.849 { 00:14:52.849 "name": "BaseBdev3", 00:14:52.849 "aliases": [ 00:14:52.849 "9830fbe0-b7f7-4eb1-8fe5-dc26c815ae8f" 00:14:52.849 ], 00:14:52.849 "product_name": "Malloc disk", 00:14:52.849 "block_size": 512, 00:14:52.849 "num_blocks": 65536, 00:14:52.849 "uuid": "9830fbe0-b7f7-4eb1-8fe5-dc26c815ae8f", 00:14:52.849 
"assigned_rate_limits": { 00:14:52.849 "rw_ios_per_sec": 0, 00:14:52.849 "rw_mbytes_per_sec": 0, 00:14:52.849 "r_mbytes_per_sec": 0, 00:14:52.849 "w_mbytes_per_sec": 0 00:14:52.849 }, 00:14:52.849 "claimed": true, 00:14:52.850 "claim_type": "exclusive_write", 00:14:52.850 "zoned": false, 00:14:52.850 "supported_io_types": { 00:14:52.850 "read": true, 00:14:52.850 "write": true, 00:14:52.850 "unmap": true, 00:14:52.850 "flush": true, 00:14:52.850 "reset": true, 00:14:52.850 "nvme_admin": false, 00:14:52.850 "nvme_io": false, 00:14:52.850 "nvme_io_md": false, 00:14:52.850 "write_zeroes": true, 00:14:52.850 "zcopy": true, 00:14:52.850 "get_zone_info": false, 00:14:52.850 "zone_management": false, 00:14:52.850 "zone_append": false, 00:14:52.850 "compare": false, 00:14:52.850 "compare_and_write": false, 00:14:52.850 "abort": true, 00:14:52.850 "seek_hole": false, 00:14:52.850 "seek_data": false, 00:14:52.850 "copy": true, 00:14:52.850 "nvme_iov_md": false 00:14:52.850 }, 00:14:52.850 "memory_domains": [ 00:14:52.850 { 00:14:52.850 "dma_device_id": "system", 00:14:52.850 "dma_device_type": 1 00:14:52.850 }, 00:14:52.850 { 00:14:52.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.850 "dma_device_type": 2 00:14:52.850 } 00:14:52.850 ], 00:14:52.850 "driver_specific": {} 00:14:52.850 } 00:14:52.850 ] 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.850 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.109 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.109 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.109 "name": "Existed_Raid", 00:14:53.109 "uuid": "f8d3014e-a99e-43fd-a57d-b02a86ea125b", 00:14:53.109 "strip_size_kb": 64, 00:14:53.109 "state": "configuring", 00:14:53.109 "raid_level": "raid5f", 00:14:53.109 "superblock": true, 00:14:53.109 "num_base_bdevs": 4, 00:14:53.109 "num_base_bdevs_discovered": 3, 
00:14:53.109 "num_base_bdevs_operational": 4, 00:14:53.109 "base_bdevs_list": [ 00:14:53.109 { 00:14:53.109 "name": "BaseBdev1", 00:14:53.109 "uuid": "16cba635-c541-413e-be6d-3172be866e7b", 00:14:53.109 "is_configured": true, 00:14:53.109 "data_offset": 2048, 00:14:53.109 "data_size": 63488 00:14:53.109 }, 00:14:53.109 { 00:14:53.109 "name": "BaseBdev2", 00:14:53.109 "uuid": "a44d074a-4c8c-4f13-b051-772ba59ce9b7", 00:14:53.109 "is_configured": true, 00:14:53.109 "data_offset": 2048, 00:14:53.109 "data_size": 63488 00:14:53.109 }, 00:14:53.109 { 00:14:53.109 "name": "BaseBdev3", 00:14:53.109 "uuid": "9830fbe0-b7f7-4eb1-8fe5-dc26c815ae8f", 00:14:53.109 "is_configured": true, 00:14:53.109 "data_offset": 2048, 00:14:53.109 "data_size": 63488 00:14:53.109 }, 00:14:53.109 { 00:14:53.109 "name": "BaseBdev4", 00:14:53.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.109 "is_configured": false, 00:14:53.109 "data_offset": 0, 00:14:53.110 "data_size": 0 00:14:53.110 } 00:14:53.110 ] 00:14:53.110 }' 00:14:53.110 16:40:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.110 16:40:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.404 [2024-12-07 16:40:52.244337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:53.404 [2024-12-07 16:40:52.244619] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:53.404 [2024-12-07 16:40:52.244637] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:53.404 BaseBdev4 
00:14:53.404 [2024-12-07 16:40:52.244950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:53.404 [2024-12-07 16:40:52.245480] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:53.404 [2024-12-07 16:40:52.245502] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:53.404 [2024-12-07 16:40:52.245642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:53.404 16:40:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.404 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.404 [ 00:14:53.404 { 00:14:53.405 "name": "BaseBdev4", 00:14:53.405 "aliases": [ 00:14:53.405 "d0ca866a-2915-49a2-a81c-abdfe09a231c" 00:14:53.405 ], 00:14:53.405 "product_name": "Malloc disk", 00:14:53.405 "block_size": 512, 00:14:53.405 "num_blocks": 65536, 00:14:53.405 "uuid": "d0ca866a-2915-49a2-a81c-abdfe09a231c", 00:14:53.405 "assigned_rate_limits": { 00:14:53.405 "rw_ios_per_sec": 0, 00:14:53.405 "rw_mbytes_per_sec": 0, 00:14:53.405 "r_mbytes_per_sec": 0, 00:14:53.405 "w_mbytes_per_sec": 0 00:14:53.405 }, 00:14:53.405 "claimed": true, 00:14:53.405 "claim_type": "exclusive_write", 00:14:53.405 "zoned": false, 00:14:53.405 "supported_io_types": { 00:14:53.405 "read": true, 00:14:53.405 "write": true, 00:14:53.405 "unmap": true, 00:14:53.405 "flush": true, 00:14:53.405 "reset": true, 00:14:53.405 "nvme_admin": false, 00:14:53.405 "nvme_io": false, 00:14:53.405 "nvme_io_md": false, 00:14:53.405 "write_zeroes": true, 00:14:53.405 "zcopy": true, 00:14:53.405 "get_zone_info": false, 00:14:53.405 "zone_management": false, 00:14:53.405 "zone_append": false, 00:14:53.405 "compare": false, 00:14:53.405 "compare_and_write": false, 00:14:53.405 "abort": true, 00:14:53.405 "seek_hole": false, 00:14:53.405 "seek_data": false, 00:14:53.405 "copy": true, 00:14:53.405 "nvme_iov_md": false 00:14:53.405 }, 00:14:53.405 "memory_domains": [ 00:14:53.405 { 00:14:53.405 "dma_device_id": "system", 00:14:53.405 "dma_device_type": 1 00:14:53.405 }, 00:14:53.405 { 00:14:53.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.405 "dma_device_type": 2 00:14:53.405 } 00:14:53.405 ], 00:14:53.405 "driver_specific": {} 00:14:53.405 } 00:14:53.405 ] 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.405 16:40:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.405 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:53.664 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.664 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.664 "name": "Existed_Raid", 00:14:53.664 "uuid": "f8d3014e-a99e-43fd-a57d-b02a86ea125b", 00:14:53.664 "strip_size_kb": 64, 00:14:53.664 "state": "online", 00:14:53.664 "raid_level": "raid5f", 00:14:53.664 "superblock": true, 00:14:53.664 "num_base_bdevs": 4, 00:14:53.664 "num_base_bdevs_discovered": 4, 00:14:53.664 "num_base_bdevs_operational": 4, 00:14:53.664 "base_bdevs_list": [ 00:14:53.664 { 00:14:53.664 "name": "BaseBdev1", 00:14:53.664 "uuid": "16cba635-c541-413e-be6d-3172be866e7b", 00:14:53.664 "is_configured": true, 00:14:53.664 "data_offset": 2048, 00:14:53.664 "data_size": 63488 00:14:53.664 }, 00:14:53.664 { 00:14:53.664 "name": "BaseBdev2", 00:14:53.664 "uuid": "a44d074a-4c8c-4f13-b051-772ba59ce9b7", 00:14:53.664 "is_configured": true, 00:14:53.664 "data_offset": 2048, 00:14:53.664 "data_size": 63488 00:14:53.664 }, 00:14:53.664 { 00:14:53.664 "name": "BaseBdev3", 00:14:53.664 "uuid": "9830fbe0-b7f7-4eb1-8fe5-dc26c815ae8f", 00:14:53.664 "is_configured": true, 00:14:53.664 "data_offset": 2048, 00:14:53.664 "data_size": 63488 00:14:53.664 }, 00:14:53.664 { 00:14:53.664 "name": "BaseBdev4", 00:14:53.664 "uuid": "d0ca866a-2915-49a2-a81c-abdfe09a231c", 00:14:53.664 "is_configured": true, 00:14:53.664 "data_offset": 2048, 00:14:53.664 "data_size": 63488 00:14:53.664 } 00:14:53.664 ] 00:14:53.664 }' 00:14:53.664 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.664 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.929 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:53.929 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:14:53.929 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.929 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.929 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.929 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.929 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.929 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:53.929 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.929 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.929 [2024-12-07 16:40:52.755857] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.929 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.929 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.929 "name": "Existed_Raid", 00:14:53.929 "aliases": [ 00:14:53.929 "f8d3014e-a99e-43fd-a57d-b02a86ea125b" 00:14:53.929 ], 00:14:53.929 "product_name": "Raid Volume", 00:14:53.929 "block_size": 512, 00:14:53.929 "num_blocks": 190464, 00:14:53.929 "uuid": "f8d3014e-a99e-43fd-a57d-b02a86ea125b", 00:14:53.929 "assigned_rate_limits": { 00:14:53.929 "rw_ios_per_sec": 0, 00:14:53.929 "rw_mbytes_per_sec": 0, 00:14:53.929 "r_mbytes_per_sec": 0, 00:14:53.929 "w_mbytes_per_sec": 0 00:14:53.929 }, 00:14:53.929 "claimed": false, 00:14:53.929 "zoned": false, 00:14:53.929 "supported_io_types": { 00:14:53.929 "read": true, 00:14:53.929 "write": true, 00:14:53.929 "unmap": false, 00:14:53.929 "flush": false, 
00:14:53.929 "reset": true, 00:14:53.929 "nvme_admin": false, 00:14:53.929 "nvme_io": false, 00:14:53.929 "nvme_io_md": false, 00:14:53.929 "write_zeroes": true, 00:14:53.929 "zcopy": false, 00:14:53.929 "get_zone_info": false, 00:14:53.929 "zone_management": false, 00:14:53.929 "zone_append": false, 00:14:53.929 "compare": false, 00:14:53.929 "compare_and_write": false, 00:14:53.929 "abort": false, 00:14:53.929 "seek_hole": false, 00:14:53.929 "seek_data": false, 00:14:53.929 "copy": false, 00:14:53.929 "nvme_iov_md": false 00:14:53.929 }, 00:14:53.929 "driver_specific": { 00:14:53.929 "raid": { 00:14:53.929 "uuid": "f8d3014e-a99e-43fd-a57d-b02a86ea125b", 00:14:53.929 "strip_size_kb": 64, 00:14:53.929 "state": "online", 00:14:53.929 "raid_level": "raid5f", 00:14:53.929 "superblock": true, 00:14:53.929 "num_base_bdevs": 4, 00:14:53.929 "num_base_bdevs_discovered": 4, 00:14:53.929 "num_base_bdevs_operational": 4, 00:14:53.929 "base_bdevs_list": [ 00:14:53.929 { 00:14:53.929 "name": "BaseBdev1", 00:14:53.929 "uuid": "16cba635-c541-413e-be6d-3172be866e7b", 00:14:53.929 "is_configured": true, 00:14:53.929 "data_offset": 2048, 00:14:53.929 "data_size": 63488 00:14:53.929 }, 00:14:53.929 { 00:14:53.929 "name": "BaseBdev2", 00:14:53.929 "uuid": "a44d074a-4c8c-4f13-b051-772ba59ce9b7", 00:14:53.929 "is_configured": true, 00:14:53.929 "data_offset": 2048, 00:14:53.929 "data_size": 63488 00:14:53.929 }, 00:14:53.929 { 00:14:53.929 "name": "BaseBdev3", 00:14:53.929 "uuid": "9830fbe0-b7f7-4eb1-8fe5-dc26c815ae8f", 00:14:53.929 "is_configured": true, 00:14:53.929 "data_offset": 2048, 00:14:53.929 "data_size": 63488 00:14:53.929 }, 00:14:53.929 { 00:14:53.929 "name": "BaseBdev4", 00:14:53.929 "uuid": "d0ca866a-2915-49a2-a81c-abdfe09a231c", 00:14:53.929 "is_configured": true, 00:14:53.929 "data_offset": 2048, 00:14:53.929 "data_size": 63488 00:14:53.929 } 00:14:53.929 ] 00:14:53.929 } 00:14:53.929 } 00:14:53.929 }' 00:14:53.929 16:40:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:54.197 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:54.197 BaseBdev2 00:14:54.197 BaseBdev3 00:14:54.197 BaseBdev4' 00:14:54.197 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.197 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:54.197 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.197 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:54.197 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.197 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.197 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.197 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.197 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.198 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.198 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.198 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:54.198 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.198 16:40:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.198 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.198 16:40:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.198 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.198 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.198 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.198 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:54.198 16:40:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.198 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.198 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.198 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.198 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.198 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.198 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.198 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.198 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:54.198 16:40:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.198 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.198 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.463 [2024-12-07 16:40:53.103517] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.463 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.463 "name": "Existed_Raid", 00:14:54.463 "uuid": "f8d3014e-a99e-43fd-a57d-b02a86ea125b", 00:14:54.463 "strip_size_kb": 64, 00:14:54.463 "state": "online", 00:14:54.463 "raid_level": "raid5f", 00:14:54.463 "superblock": true, 00:14:54.463 "num_base_bdevs": 4, 00:14:54.463 "num_base_bdevs_discovered": 3, 00:14:54.463 "num_base_bdevs_operational": 3, 00:14:54.463 "base_bdevs_list": [ 00:14:54.463 { 00:14:54.463 "name": 
null, 00:14:54.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.463 "is_configured": false, 00:14:54.463 "data_offset": 0, 00:14:54.463 "data_size": 63488 00:14:54.463 }, 00:14:54.463 { 00:14:54.463 "name": "BaseBdev2", 00:14:54.463 "uuid": "a44d074a-4c8c-4f13-b051-772ba59ce9b7", 00:14:54.464 "is_configured": true, 00:14:54.464 "data_offset": 2048, 00:14:54.464 "data_size": 63488 00:14:54.464 }, 00:14:54.464 { 00:14:54.464 "name": "BaseBdev3", 00:14:54.464 "uuid": "9830fbe0-b7f7-4eb1-8fe5-dc26c815ae8f", 00:14:54.464 "is_configured": true, 00:14:54.464 "data_offset": 2048, 00:14:54.464 "data_size": 63488 00:14:54.464 }, 00:14:54.464 { 00:14:54.464 "name": "BaseBdev4", 00:14:54.464 "uuid": "d0ca866a-2915-49a2-a81c-abdfe09a231c", 00:14:54.464 "is_configured": true, 00:14:54.464 "data_offset": 2048, 00:14:54.464 "data_size": 63488 00:14:54.464 } 00:14:54.464 ] 00:14:54.464 }' 00:14:54.464 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.464 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.727 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:54.727 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.727 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.727 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.727 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.727 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.986 [2024-12-07 16:40:53.663771] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:54.986 [2024-12-07 16:40:53.663998] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.986 [2024-12-07 16:40:53.684634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.986 [2024-12-07 16:40:53.744513] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.986 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.987 [2024-12-07 
16:40:53.819935] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:54.987 [2024-12-07 16:40:53.820033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.987 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.247 16:40:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.247 BaseBdev2 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.247 [ 00:14:55.247 { 00:14:55.247 "name": "BaseBdev2", 00:14:55.247 "aliases": [ 00:14:55.247 "3ec8ec53-3f36-437f-9303-9a8d8d2b9dd6" 00:14:55.247 ], 00:14:55.247 "product_name": "Malloc disk", 00:14:55.247 "block_size": 512, 00:14:55.247 
"num_blocks": 65536, 00:14:55.247 "uuid": "3ec8ec53-3f36-437f-9303-9a8d8d2b9dd6", 00:14:55.247 "assigned_rate_limits": { 00:14:55.247 "rw_ios_per_sec": 0, 00:14:55.247 "rw_mbytes_per_sec": 0, 00:14:55.247 "r_mbytes_per_sec": 0, 00:14:55.247 "w_mbytes_per_sec": 0 00:14:55.247 }, 00:14:55.247 "claimed": false, 00:14:55.247 "zoned": false, 00:14:55.247 "supported_io_types": { 00:14:55.247 "read": true, 00:14:55.247 "write": true, 00:14:55.247 "unmap": true, 00:14:55.247 "flush": true, 00:14:55.247 "reset": true, 00:14:55.247 "nvme_admin": false, 00:14:55.247 "nvme_io": false, 00:14:55.247 "nvme_io_md": false, 00:14:55.247 "write_zeroes": true, 00:14:55.247 "zcopy": true, 00:14:55.247 "get_zone_info": false, 00:14:55.247 "zone_management": false, 00:14:55.247 "zone_append": false, 00:14:55.247 "compare": false, 00:14:55.247 "compare_and_write": false, 00:14:55.247 "abort": true, 00:14:55.247 "seek_hole": false, 00:14:55.247 "seek_data": false, 00:14:55.247 "copy": true, 00:14:55.247 "nvme_iov_md": false 00:14:55.247 }, 00:14:55.247 "memory_domains": [ 00:14:55.247 { 00:14:55.247 "dma_device_id": "system", 00:14:55.247 "dma_device_type": 1 00:14:55.247 }, 00:14:55.247 { 00:14:55.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.247 "dma_device_type": 2 00:14:55.247 } 00:14:55.247 ], 00:14:55.247 "driver_specific": {} 00:14:55.247 } 00:14:55.247 ] 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:55.247 16:40:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.247 BaseBdev3 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.247 16:40:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.247 [ 00:14:55.247 { 00:14:55.247 "name": "BaseBdev3", 00:14:55.247 "aliases": [ 00:14:55.247 
"79b5155a-f03c-4a17-9c65-04f751abb0e4" 00:14:55.247 ], 00:14:55.247 "product_name": "Malloc disk", 00:14:55.247 "block_size": 512, 00:14:55.247 "num_blocks": 65536, 00:14:55.247 "uuid": "79b5155a-f03c-4a17-9c65-04f751abb0e4", 00:14:55.247 "assigned_rate_limits": { 00:14:55.247 "rw_ios_per_sec": 0, 00:14:55.247 "rw_mbytes_per_sec": 0, 00:14:55.247 "r_mbytes_per_sec": 0, 00:14:55.247 "w_mbytes_per_sec": 0 00:14:55.247 }, 00:14:55.248 "claimed": false, 00:14:55.248 "zoned": false, 00:14:55.248 "supported_io_types": { 00:14:55.248 "read": true, 00:14:55.248 "write": true, 00:14:55.248 "unmap": true, 00:14:55.248 "flush": true, 00:14:55.248 "reset": true, 00:14:55.248 "nvme_admin": false, 00:14:55.248 "nvme_io": false, 00:14:55.248 "nvme_io_md": false, 00:14:55.248 "write_zeroes": true, 00:14:55.248 "zcopy": true, 00:14:55.248 "get_zone_info": false, 00:14:55.248 "zone_management": false, 00:14:55.248 "zone_append": false, 00:14:55.248 "compare": false, 00:14:55.248 "compare_and_write": false, 00:14:55.248 "abort": true, 00:14:55.248 "seek_hole": false, 00:14:55.248 "seek_data": false, 00:14:55.248 "copy": true, 00:14:55.248 "nvme_iov_md": false 00:14:55.248 }, 00:14:55.248 "memory_domains": [ 00:14:55.248 { 00:14:55.248 "dma_device_id": "system", 00:14:55.248 "dma_device_type": 1 00:14:55.248 }, 00:14:55.248 { 00:14:55.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.248 "dma_device_type": 2 00:14:55.248 } 00:14:55.248 ], 00:14:55.248 "driver_specific": {} 00:14:55.248 } 00:14:55.248 ] 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:55.248 16:40:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.248 BaseBdev4 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:55.248 [ 00:14:55.248 { 00:14:55.248 "name": "BaseBdev4", 00:14:55.248 "aliases": [ 00:14:55.248 "d9113d71-6af9-486f-a038-fe6a4203ab80" 00:14:55.248 ], 00:14:55.248 "product_name": "Malloc disk", 00:14:55.248 "block_size": 512, 00:14:55.248 "num_blocks": 65536, 00:14:55.248 "uuid": "d9113d71-6af9-486f-a038-fe6a4203ab80", 00:14:55.248 "assigned_rate_limits": { 00:14:55.248 "rw_ios_per_sec": 0, 00:14:55.248 "rw_mbytes_per_sec": 0, 00:14:55.248 "r_mbytes_per_sec": 0, 00:14:55.248 "w_mbytes_per_sec": 0 00:14:55.248 }, 00:14:55.248 "claimed": false, 00:14:55.248 "zoned": false, 00:14:55.248 "supported_io_types": { 00:14:55.248 "read": true, 00:14:55.248 "write": true, 00:14:55.248 "unmap": true, 00:14:55.248 "flush": true, 00:14:55.248 "reset": true, 00:14:55.248 "nvme_admin": false, 00:14:55.248 "nvme_io": false, 00:14:55.248 "nvme_io_md": false, 00:14:55.248 "write_zeroes": true, 00:14:55.248 "zcopy": true, 00:14:55.248 "get_zone_info": false, 00:14:55.248 "zone_management": false, 00:14:55.248 "zone_append": false, 00:14:55.248 "compare": false, 00:14:55.248 "compare_and_write": false, 00:14:55.248 "abort": true, 00:14:55.248 "seek_hole": false, 00:14:55.248 "seek_data": false, 00:14:55.248 "copy": true, 00:14:55.248 "nvme_iov_md": false 00:14:55.248 }, 00:14:55.248 "memory_domains": [ 00:14:55.248 { 00:14:55.248 "dma_device_id": "system", 00:14:55.248 "dma_device_type": 1 00:14:55.248 }, 00:14:55.248 { 00:14:55.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.248 "dma_device_type": 2 00:14:55.248 } 00:14:55.248 ], 00:14:55.248 "driver_specific": {} 00:14:55.248 } 00:14:55.248 ] 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:55.248 16:40:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.248 [2024-12-07 16:40:54.075496] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.248 [2024-12-07 16:40:54.075608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.248 [2024-12-07 16:40:54.075651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.248 [2024-12-07 16:40:54.077864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.248 [2024-12-07 16:40:54.077955] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.248 "name": "Existed_Raid", 00:14:55.248 "uuid": "78069000-8628-44db-82d5-4b58fe4792a5", 00:14:55.248 "strip_size_kb": 64, 00:14:55.248 "state": "configuring", 00:14:55.248 "raid_level": "raid5f", 00:14:55.248 "superblock": true, 00:14:55.248 "num_base_bdevs": 4, 00:14:55.248 "num_base_bdevs_discovered": 3, 00:14:55.248 "num_base_bdevs_operational": 4, 00:14:55.248 "base_bdevs_list": [ 00:14:55.248 { 00:14:55.248 "name": "BaseBdev1", 00:14:55.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.248 "is_configured": false, 00:14:55.248 "data_offset": 0, 00:14:55.248 "data_size": 0 00:14:55.248 }, 00:14:55.248 { 00:14:55.248 "name": "BaseBdev2", 00:14:55.248 "uuid": "3ec8ec53-3f36-437f-9303-9a8d8d2b9dd6", 00:14:55.248 "is_configured": true, 00:14:55.248 "data_offset": 2048, 00:14:55.248 
"data_size": 63488 00:14:55.248 }, 00:14:55.248 { 00:14:55.248 "name": "BaseBdev3", 00:14:55.248 "uuid": "79b5155a-f03c-4a17-9c65-04f751abb0e4", 00:14:55.248 "is_configured": true, 00:14:55.248 "data_offset": 2048, 00:14:55.248 "data_size": 63488 00:14:55.248 }, 00:14:55.248 { 00:14:55.248 "name": "BaseBdev4", 00:14:55.248 "uuid": "d9113d71-6af9-486f-a038-fe6a4203ab80", 00:14:55.248 "is_configured": true, 00:14:55.248 "data_offset": 2048, 00:14:55.248 "data_size": 63488 00:14:55.248 } 00:14:55.248 ] 00:14:55.248 }' 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.248 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.817 [2024-12-07 16:40:54.530673] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.817 16:40:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.817 "name": "Existed_Raid", 00:14:55.817 "uuid": "78069000-8628-44db-82d5-4b58fe4792a5", 00:14:55.817 "strip_size_kb": 64, 00:14:55.817 "state": "configuring", 00:14:55.817 "raid_level": "raid5f", 00:14:55.817 "superblock": true, 00:14:55.817 "num_base_bdevs": 4, 00:14:55.817 "num_base_bdevs_discovered": 2, 00:14:55.817 "num_base_bdevs_operational": 4, 00:14:55.817 "base_bdevs_list": [ 00:14:55.817 { 00:14:55.817 "name": "BaseBdev1", 00:14:55.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.817 "is_configured": false, 00:14:55.817 "data_offset": 0, 00:14:55.817 "data_size": 0 00:14:55.817 }, 00:14:55.817 { 00:14:55.817 "name": null, 00:14:55.817 "uuid": "3ec8ec53-3f36-437f-9303-9a8d8d2b9dd6", 00:14:55.817 
"is_configured": false, 00:14:55.817 "data_offset": 0, 00:14:55.817 "data_size": 63488 00:14:55.817 }, 00:14:55.817 { 00:14:55.817 "name": "BaseBdev3", 00:14:55.817 "uuid": "79b5155a-f03c-4a17-9c65-04f751abb0e4", 00:14:55.817 "is_configured": true, 00:14:55.817 "data_offset": 2048, 00:14:55.817 "data_size": 63488 00:14:55.817 }, 00:14:55.817 { 00:14:55.817 "name": "BaseBdev4", 00:14:55.817 "uuid": "d9113d71-6af9-486f-a038-fe6a4203ab80", 00:14:55.817 "is_configured": true, 00:14:55.817 "data_offset": 2048, 00:14:55.817 "data_size": 63488 00:14:55.817 } 00:14:55.817 ] 00:14:55.817 }' 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.817 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.078 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.078 16:40:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:56.078 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.078 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.338 16:40:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.338 [2024-12-07 16:40:55.026616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:14:56.338 BaseBdev1 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.338 [ 00:14:56.338 { 00:14:56.338 "name": "BaseBdev1", 00:14:56.338 "aliases": [ 00:14:56.338 "90648b6f-8ff0-4ffc-bd20-a3172e030c05" 00:14:56.338 ], 00:14:56.338 "product_name": "Malloc disk", 00:14:56.338 "block_size": 512, 00:14:56.338 "num_blocks": 65536, 00:14:56.338 "uuid": "90648b6f-8ff0-4ffc-bd20-a3172e030c05", 
00:14:56.338 "assigned_rate_limits": { 00:14:56.338 "rw_ios_per_sec": 0, 00:14:56.338 "rw_mbytes_per_sec": 0, 00:14:56.338 "r_mbytes_per_sec": 0, 00:14:56.338 "w_mbytes_per_sec": 0 00:14:56.338 }, 00:14:56.338 "claimed": true, 00:14:56.338 "claim_type": "exclusive_write", 00:14:56.338 "zoned": false, 00:14:56.338 "supported_io_types": { 00:14:56.338 "read": true, 00:14:56.338 "write": true, 00:14:56.338 "unmap": true, 00:14:56.338 "flush": true, 00:14:56.338 "reset": true, 00:14:56.338 "nvme_admin": false, 00:14:56.338 "nvme_io": false, 00:14:56.338 "nvme_io_md": false, 00:14:56.338 "write_zeroes": true, 00:14:56.338 "zcopy": true, 00:14:56.338 "get_zone_info": false, 00:14:56.338 "zone_management": false, 00:14:56.338 "zone_append": false, 00:14:56.338 "compare": false, 00:14:56.338 "compare_and_write": false, 00:14:56.338 "abort": true, 00:14:56.338 "seek_hole": false, 00:14:56.338 "seek_data": false, 00:14:56.338 "copy": true, 00:14:56.338 "nvme_iov_md": false 00:14:56.338 }, 00:14:56.338 "memory_domains": [ 00:14:56.338 { 00:14:56.338 "dma_device_id": "system", 00:14:56.338 "dma_device_type": 1 00:14:56.338 }, 00:14:56.338 { 00:14:56.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.338 "dma_device_type": 2 00:14:56.338 } 00:14:56.338 ], 00:14:56.338 "driver_specific": {} 00:14:56.338 } 00:14:56.338 ] 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.338 16:40:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.338 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.339 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.339 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.339 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.339 "name": "Existed_Raid", 00:14:56.339 "uuid": "78069000-8628-44db-82d5-4b58fe4792a5", 00:14:56.339 "strip_size_kb": 64, 00:14:56.339 "state": "configuring", 00:14:56.339 "raid_level": "raid5f", 00:14:56.339 "superblock": true, 00:14:56.339 "num_base_bdevs": 4, 00:14:56.339 "num_base_bdevs_discovered": 3, 00:14:56.339 "num_base_bdevs_operational": 4, 00:14:56.339 "base_bdevs_list": [ 00:14:56.339 { 00:14:56.339 "name": "BaseBdev1", 00:14:56.339 "uuid": "90648b6f-8ff0-4ffc-bd20-a3172e030c05", 
00:14:56.339 "is_configured": true, 00:14:56.339 "data_offset": 2048, 00:14:56.339 "data_size": 63488 00:14:56.339 }, 00:14:56.339 { 00:14:56.339 "name": null, 00:14:56.339 "uuid": "3ec8ec53-3f36-437f-9303-9a8d8d2b9dd6", 00:14:56.339 "is_configured": false, 00:14:56.339 "data_offset": 0, 00:14:56.339 "data_size": 63488 00:14:56.339 }, 00:14:56.339 { 00:14:56.339 "name": "BaseBdev3", 00:14:56.339 "uuid": "79b5155a-f03c-4a17-9c65-04f751abb0e4", 00:14:56.339 "is_configured": true, 00:14:56.339 "data_offset": 2048, 00:14:56.339 "data_size": 63488 00:14:56.339 }, 00:14:56.339 { 00:14:56.339 "name": "BaseBdev4", 00:14:56.339 "uuid": "d9113d71-6af9-486f-a038-fe6a4203ab80", 00:14:56.339 "is_configured": true, 00:14:56.339 "data_offset": 2048, 00:14:56.339 "data_size": 63488 00:14:56.339 } 00:14:56.339 ] 00:14:56.339 }' 00:14:56.339 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.339 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.908 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:56.908 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.908 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.908 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.908 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.908 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:56.908 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:56.908 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:56.908 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.909 [2024-12-07 16:40:55.573694] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.909 "name": "Existed_Raid", 00:14:56.909 "uuid": "78069000-8628-44db-82d5-4b58fe4792a5", 00:14:56.909 "strip_size_kb": 64, 00:14:56.909 "state": "configuring", 00:14:56.909 "raid_level": "raid5f", 00:14:56.909 "superblock": true, 00:14:56.909 "num_base_bdevs": 4, 00:14:56.909 "num_base_bdevs_discovered": 2, 00:14:56.909 "num_base_bdevs_operational": 4, 00:14:56.909 "base_bdevs_list": [ 00:14:56.909 { 00:14:56.909 "name": "BaseBdev1", 00:14:56.909 "uuid": "90648b6f-8ff0-4ffc-bd20-a3172e030c05", 00:14:56.909 "is_configured": true, 00:14:56.909 "data_offset": 2048, 00:14:56.909 "data_size": 63488 00:14:56.909 }, 00:14:56.909 { 00:14:56.909 "name": null, 00:14:56.909 "uuid": "3ec8ec53-3f36-437f-9303-9a8d8d2b9dd6", 00:14:56.909 "is_configured": false, 00:14:56.909 "data_offset": 0, 00:14:56.909 "data_size": 63488 00:14:56.909 }, 00:14:56.909 { 00:14:56.909 "name": null, 00:14:56.909 "uuid": "79b5155a-f03c-4a17-9c65-04f751abb0e4", 00:14:56.909 "is_configured": false, 00:14:56.909 "data_offset": 0, 00:14:56.909 "data_size": 63488 00:14:56.909 }, 00:14:56.909 { 00:14:56.909 "name": "BaseBdev4", 00:14:56.909 "uuid": "d9113d71-6af9-486f-a038-fe6a4203ab80", 00:14:56.909 "is_configured": true, 00:14:56.909 "data_offset": 2048, 00:14:56.909 "data_size": 63488 00:14:56.909 } 00:14:56.909 ] 00:14:56.909 }' 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.909 16:40:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.180 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:57.180 16:40:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.181 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.181 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.442 [2024-12-07 16:40:56.092848] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.442 "name": "Existed_Raid", 00:14:57.442 "uuid": "78069000-8628-44db-82d5-4b58fe4792a5", 00:14:57.442 "strip_size_kb": 64, 00:14:57.442 "state": "configuring", 00:14:57.442 "raid_level": "raid5f", 00:14:57.442 "superblock": true, 00:14:57.442 "num_base_bdevs": 4, 00:14:57.442 "num_base_bdevs_discovered": 3, 00:14:57.442 "num_base_bdevs_operational": 4, 00:14:57.442 "base_bdevs_list": [ 00:14:57.442 { 00:14:57.442 "name": "BaseBdev1", 00:14:57.442 "uuid": "90648b6f-8ff0-4ffc-bd20-a3172e030c05", 00:14:57.442 "is_configured": true, 00:14:57.442 "data_offset": 2048, 00:14:57.442 "data_size": 63488 00:14:57.442 }, 00:14:57.442 { 00:14:57.442 "name": null, 00:14:57.442 "uuid": "3ec8ec53-3f36-437f-9303-9a8d8d2b9dd6", 00:14:57.442 "is_configured": false, 00:14:57.442 "data_offset": 0, 00:14:57.442 "data_size": 63488 00:14:57.442 }, 00:14:57.442 { 00:14:57.442 "name": "BaseBdev3", 00:14:57.442 "uuid": "79b5155a-f03c-4a17-9c65-04f751abb0e4", 
00:14:57.442 "is_configured": true, 00:14:57.442 "data_offset": 2048, 00:14:57.442 "data_size": 63488 00:14:57.442 }, 00:14:57.442 { 00:14:57.442 "name": "BaseBdev4", 00:14:57.442 "uuid": "d9113d71-6af9-486f-a038-fe6a4203ab80", 00:14:57.442 "is_configured": true, 00:14:57.442 "data_offset": 2048, 00:14:57.442 "data_size": 63488 00:14:57.442 } 00:14:57.442 ] 00:14:57.442 }' 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.442 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.702 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.702 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.702 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.702 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:57.702 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.702 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:57.702 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:57.702 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.702 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.702 [2024-12-07 16:40:56.580031] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.961 "name": "Existed_Raid", 00:14:57.961 "uuid": "78069000-8628-44db-82d5-4b58fe4792a5", 00:14:57.961 "strip_size_kb": 64, 00:14:57.961 "state": "configuring", 00:14:57.961 "raid_level": "raid5f", 
00:14:57.961 "superblock": true, 00:14:57.961 "num_base_bdevs": 4, 00:14:57.961 "num_base_bdevs_discovered": 2, 00:14:57.961 "num_base_bdevs_operational": 4, 00:14:57.961 "base_bdevs_list": [ 00:14:57.961 { 00:14:57.961 "name": null, 00:14:57.961 "uuid": "90648b6f-8ff0-4ffc-bd20-a3172e030c05", 00:14:57.961 "is_configured": false, 00:14:57.961 "data_offset": 0, 00:14:57.961 "data_size": 63488 00:14:57.961 }, 00:14:57.961 { 00:14:57.961 "name": null, 00:14:57.961 "uuid": "3ec8ec53-3f36-437f-9303-9a8d8d2b9dd6", 00:14:57.961 "is_configured": false, 00:14:57.961 "data_offset": 0, 00:14:57.961 "data_size": 63488 00:14:57.961 }, 00:14:57.961 { 00:14:57.961 "name": "BaseBdev3", 00:14:57.961 "uuid": "79b5155a-f03c-4a17-9c65-04f751abb0e4", 00:14:57.961 "is_configured": true, 00:14:57.961 "data_offset": 2048, 00:14:57.961 "data_size": 63488 00:14:57.961 }, 00:14:57.961 { 00:14:57.961 "name": "BaseBdev4", 00:14:57.961 "uuid": "d9113d71-6af9-486f-a038-fe6a4203ab80", 00:14:57.961 "is_configured": true, 00:14:57.961 "data_offset": 2048, 00:14:57.961 "data_size": 63488 00:14:57.961 } 00:14:57.961 ] 00:14:57.961 }' 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.961 16:40:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.219 [2024-12-07 16:40:57.079164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.219 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.478 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.478 "name": "Existed_Raid", 00:14:58.478 "uuid": "78069000-8628-44db-82d5-4b58fe4792a5", 00:14:58.478 "strip_size_kb": 64, 00:14:58.478 "state": "configuring", 00:14:58.478 "raid_level": "raid5f", 00:14:58.478 "superblock": true, 00:14:58.478 "num_base_bdevs": 4, 00:14:58.478 "num_base_bdevs_discovered": 3, 00:14:58.478 "num_base_bdevs_operational": 4, 00:14:58.478 "base_bdevs_list": [ 00:14:58.478 { 00:14:58.478 "name": null, 00:14:58.478 "uuid": "90648b6f-8ff0-4ffc-bd20-a3172e030c05", 00:14:58.478 "is_configured": false, 00:14:58.478 "data_offset": 0, 00:14:58.478 "data_size": 63488 00:14:58.478 }, 00:14:58.478 { 00:14:58.478 "name": "BaseBdev2", 00:14:58.478 "uuid": "3ec8ec53-3f36-437f-9303-9a8d8d2b9dd6", 00:14:58.478 "is_configured": true, 00:14:58.478 "data_offset": 2048, 00:14:58.478 "data_size": 63488 00:14:58.478 }, 00:14:58.478 { 00:14:58.478 "name": "BaseBdev3", 00:14:58.478 "uuid": "79b5155a-f03c-4a17-9c65-04f751abb0e4", 00:14:58.478 "is_configured": true, 00:14:58.478 "data_offset": 2048, 00:14:58.478 "data_size": 63488 00:14:58.478 }, 00:14:58.478 { 00:14:58.478 "name": "BaseBdev4", 00:14:58.478 "uuid": "d9113d71-6af9-486f-a038-fe6a4203ab80", 00:14:58.478 "is_configured": true, 00:14:58.478 "data_offset": 2048, 00:14:58.478 "data_size": 63488 00:14:58.478 } 00:14:58.478 ] 00:14:58.478 }' 00:14:58.478 16:40:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.478 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 90648b6f-8ff0-4ffc-bd20-a3172e030c05 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.739 [2024-12-07 16:40:57.570993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:58.739 [2024-12-07 
16:40:57.571282] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:58.739 [2024-12-07 16:40:57.571332] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:58.739 NewBaseBdev 00:14:58.739 [2024-12-07 16:40:57.571672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:58.739 [2024-12-07 16:40:57.572148] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:58.739 [2024-12-07 16:40:57.572209] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:58.739 [2024-12-07 16:40:57.572374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.739 16:40:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.739 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.739 [ 00:14:58.739 { 00:14:58.739 "name": "NewBaseBdev", 00:14:58.739 "aliases": [ 00:14:58.739 "90648b6f-8ff0-4ffc-bd20-a3172e030c05" 00:14:58.739 ], 00:14:58.739 "product_name": "Malloc disk", 00:14:58.739 "block_size": 512, 00:14:58.739 "num_blocks": 65536, 00:14:58.739 "uuid": "90648b6f-8ff0-4ffc-bd20-a3172e030c05", 00:14:58.739 "assigned_rate_limits": { 00:14:58.739 "rw_ios_per_sec": 0, 00:14:58.739 "rw_mbytes_per_sec": 0, 00:14:58.739 "r_mbytes_per_sec": 0, 00:14:58.739 "w_mbytes_per_sec": 0 00:14:58.739 }, 00:14:58.739 "claimed": true, 00:14:58.739 "claim_type": "exclusive_write", 00:14:58.739 "zoned": false, 00:14:58.739 "supported_io_types": { 00:14:58.739 "read": true, 00:14:58.739 "write": true, 00:14:58.739 "unmap": true, 00:14:58.739 "flush": true, 00:14:58.739 "reset": true, 00:14:58.739 "nvme_admin": false, 00:14:58.739 "nvme_io": false, 00:14:58.739 "nvme_io_md": false, 00:14:58.739 "write_zeroes": true, 00:14:58.739 "zcopy": true, 00:14:58.739 "get_zone_info": false, 00:14:58.739 "zone_management": false, 00:14:58.739 "zone_append": false, 00:14:58.739 "compare": false, 00:14:58.739 "compare_and_write": false, 00:14:58.739 "abort": true, 00:14:58.739 "seek_hole": false, 00:14:58.739 "seek_data": false, 00:14:58.739 "copy": true, 00:14:58.739 "nvme_iov_md": false 00:14:58.739 }, 00:14:58.739 "memory_domains": [ 00:14:58.739 { 00:14:58.739 "dma_device_id": "system", 00:14:58.739 "dma_device_type": 1 00:14:58.739 }, 00:14:58.739 { 00:14:58.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:58.739 "dma_device_type": 2 00:14:58.740 } 00:14:58.740 ], 00:14:58.740 "driver_specific": {} 00:14:58.740 } 00:14:58.740 ] 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.740 16:40:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.999 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.999 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.999 "name": "Existed_Raid", 00:14:58.999 "uuid": "78069000-8628-44db-82d5-4b58fe4792a5", 00:14:58.999 "strip_size_kb": 64, 00:14:58.999 "state": "online", 00:14:58.999 "raid_level": "raid5f", 00:14:58.999 "superblock": true, 00:14:58.999 "num_base_bdevs": 4, 00:14:58.999 "num_base_bdevs_discovered": 4, 00:14:58.999 "num_base_bdevs_operational": 4, 00:14:58.999 "base_bdevs_list": [ 00:14:58.999 { 00:14:58.999 "name": "NewBaseBdev", 00:14:58.999 "uuid": "90648b6f-8ff0-4ffc-bd20-a3172e030c05", 00:14:58.999 "is_configured": true, 00:14:58.999 "data_offset": 2048, 00:14:58.999 "data_size": 63488 00:14:58.999 }, 00:14:58.999 { 00:14:58.999 "name": "BaseBdev2", 00:14:58.999 "uuid": "3ec8ec53-3f36-437f-9303-9a8d8d2b9dd6", 00:14:58.999 "is_configured": true, 00:14:58.999 "data_offset": 2048, 00:14:58.999 "data_size": 63488 00:14:58.999 }, 00:14:58.999 { 00:14:58.999 "name": "BaseBdev3", 00:14:58.999 "uuid": "79b5155a-f03c-4a17-9c65-04f751abb0e4", 00:14:58.999 "is_configured": true, 00:14:58.999 "data_offset": 2048, 00:14:58.999 "data_size": 63488 00:14:58.999 }, 00:14:58.999 { 00:14:59.000 "name": "BaseBdev4", 00:14:59.000 "uuid": "d9113d71-6af9-486f-a038-fe6a4203ab80", 00:14:59.000 "is_configured": true, 00:14:59.000 "data_offset": 2048, 00:14:59.000 "data_size": 63488 00:14:59.000 } 00:14:59.000 ] 00:14:59.000 }' 00:14:59.000 16:40:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.000 16:40:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.259 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:59.259 16:40:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:59.259 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:59.259 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.259 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.259 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:59.259 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:59.259 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.259 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.259 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.259 [2024-12-07 16:40:58.074399] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.259 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.259 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.259 "name": "Existed_Raid", 00:14:59.259 "aliases": [ 00:14:59.259 "78069000-8628-44db-82d5-4b58fe4792a5" 00:14:59.259 ], 00:14:59.259 "product_name": "Raid Volume", 00:14:59.259 "block_size": 512, 00:14:59.259 "num_blocks": 190464, 00:14:59.259 "uuid": "78069000-8628-44db-82d5-4b58fe4792a5", 00:14:59.259 "assigned_rate_limits": { 00:14:59.259 "rw_ios_per_sec": 0, 00:14:59.259 "rw_mbytes_per_sec": 0, 00:14:59.259 "r_mbytes_per_sec": 0, 00:14:59.259 "w_mbytes_per_sec": 0 00:14:59.259 }, 00:14:59.259 "claimed": false, 00:14:59.259 "zoned": false, 00:14:59.259 "supported_io_types": { 00:14:59.259 "read": true, 00:14:59.259 
"write": true, 00:14:59.259 "unmap": false, 00:14:59.259 "flush": false, 00:14:59.259 "reset": true, 00:14:59.259 "nvme_admin": false, 00:14:59.259 "nvme_io": false, 00:14:59.259 "nvme_io_md": false, 00:14:59.259 "write_zeroes": true, 00:14:59.259 "zcopy": false, 00:14:59.259 "get_zone_info": false, 00:14:59.259 "zone_management": false, 00:14:59.259 "zone_append": false, 00:14:59.259 "compare": false, 00:14:59.259 "compare_and_write": false, 00:14:59.259 "abort": false, 00:14:59.259 "seek_hole": false, 00:14:59.259 "seek_data": false, 00:14:59.259 "copy": false, 00:14:59.259 "nvme_iov_md": false 00:14:59.259 }, 00:14:59.259 "driver_specific": { 00:14:59.259 "raid": { 00:14:59.259 "uuid": "78069000-8628-44db-82d5-4b58fe4792a5", 00:14:59.259 "strip_size_kb": 64, 00:14:59.259 "state": "online", 00:14:59.259 "raid_level": "raid5f", 00:14:59.259 "superblock": true, 00:14:59.259 "num_base_bdevs": 4, 00:14:59.259 "num_base_bdevs_discovered": 4, 00:14:59.259 "num_base_bdevs_operational": 4, 00:14:59.259 "base_bdevs_list": [ 00:14:59.259 { 00:14:59.259 "name": "NewBaseBdev", 00:14:59.259 "uuid": "90648b6f-8ff0-4ffc-bd20-a3172e030c05", 00:14:59.259 "is_configured": true, 00:14:59.259 "data_offset": 2048, 00:14:59.259 "data_size": 63488 00:14:59.259 }, 00:14:59.259 { 00:14:59.259 "name": "BaseBdev2", 00:14:59.260 "uuid": "3ec8ec53-3f36-437f-9303-9a8d8d2b9dd6", 00:14:59.260 "is_configured": true, 00:14:59.260 "data_offset": 2048, 00:14:59.260 "data_size": 63488 00:14:59.260 }, 00:14:59.260 { 00:14:59.260 "name": "BaseBdev3", 00:14:59.260 "uuid": "79b5155a-f03c-4a17-9c65-04f751abb0e4", 00:14:59.260 "is_configured": true, 00:14:59.260 "data_offset": 2048, 00:14:59.260 "data_size": 63488 00:14:59.260 }, 00:14:59.260 { 00:14:59.260 "name": "BaseBdev4", 00:14:59.260 "uuid": "d9113d71-6af9-486f-a038-fe6a4203ab80", 00:14:59.260 "is_configured": true, 00:14:59.260 "data_offset": 2048, 00:14:59.260 "data_size": 63488 00:14:59.260 } 00:14:59.260 ] 00:14:59.260 } 00:14:59.260 } 
00:14:59.260 }' 00:14:59.260 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.260 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:59.260 BaseBdev2 00:14:59.260 BaseBdev3 00:14:59.260 BaseBdev4' 00:14:59.260 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.520 [2024-12-07 16:40:58.377645] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.520 [2024-12-07 16:40:58.377673] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.520 [2024-12-07 16:40:58.377758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.520 [2024-12-07 16:40:58.378038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.520 [2024-12-07 16:40:58.378050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 94226 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 94226 ']' 00:14:59.520 16:40:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 94226 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:59.520 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94226 00:14:59.780 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:59.780 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:59.780 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94226' 00:14:59.780 killing process with pid 94226 00:14:59.780 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 94226 00:14:59.780 [2024-12-07 16:40:58.429020] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.780 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 94226 00:14:59.780 [2024-12-07 16:40:58.504512] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.041 16:40:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:00.041 00:15:00.041 real 0m10.097s 00:15:00.041 user 0m16.902s 00:15:00.041 sys 0m2.276s 00:15:00.041 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:00.041 16:40:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.041 ************************************ 00:15:00.041 END TEST raid5f_state_function_test_sb 00:15:00.041 ************************************ 00:15:00.302 16:40:58 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 4 00:15:00.302 16:40:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:00.302 16:40:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:00.302 16:40:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:00.302 ************************************ 00:15:00.302 START TEST raid5f_superblock_test 00:15:00.302 ************************************ 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94880 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94880 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94880 ']' 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.302 16:40:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.302 [2024-12-07 16:40:59.063202] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:00.302 [2024-12-07 16:40:59.063460] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94880 ] 00:15:00.562 [2024-12-07 16:40:59.223878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.562 [2024-12-07 16:40:59.296408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.562 [2024-12-07 16:40:59.371993] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.562 [2024-12-07 16:40:59.372135] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.133 malloc1 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.133 [2024-12-07 16:40:59.901821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:01.133 [2024-12-07 16:40:59.901918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.133 [2024-12-07 16:40:59.901941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:01.133 [2024-12-07 16:40:59.901959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.133 [2024-12-07 16:40:59.904422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.133 [2024-12-07 16:40:59.904457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:01.133 pt1 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.133 malloc2 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.133 [2024-12-07 16:40:59.954740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.133 [2024-12-07 16:40:59.954991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.133 [2024-12-07 16:40:59.955104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:01.133 [2024-12-07 16:40:59.955196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.133 [2024-12-07 16:40:59.959602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.133 [2024-12-07 16:40:59.959724] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.133 pt2 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.133 malloc3 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.133 16:40:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.133 [2024-12-07 16:40:59.995566] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:01.133 [2024-12-07 16:40:59.995662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.133 [2024-12-07 16:40:59.995698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:01.133 [2024-12-07 16:40:59.995729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.133 [2024-12-07 16:40:59.998100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.133 [2024-12-07 16:40:59.998167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:01.133 pt3 00:15:01.133 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.134 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:01.134 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:01.134 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:01.134 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:01.134 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:01.134 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.134 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.134 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.134 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:01.134 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.134 16:41:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.134 malloc4 00:15:01.134 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.134 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:01.134 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.134 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.393 [2024-12-07 16:41:00.034143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:01.393 [2024-12-07 16:41:00.034241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.393 [2024-12-07 16:41:00.034274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:01.393 [2024-12-07 16:41:00.034307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.393 [2024-12-07 16:41:00.036704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.393 [2024-12-07 16:41:00.036782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:01.393 pt4 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.393 [2024-12-07 16:41:00.046235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:01.393 [2024-12-07 16:41:00.048417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.393 [2024-12-07 16:41:00.048478] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:01.393 [2024-12-07 16:41:00.048538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:01.393 [2024-12-07 16:41:00.048715] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:01.393 [2024-12-07 16:41:00.048728] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:01.393 [2024-12-07 16:41:00.048996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:01.393 [2024-12-07 16:41:00.049459] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:01.393 [2024-12-07 16:41:00.049470] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:01.393 [2024-12-07 16:41:00.049609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.393 
16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.393 "name": "raid_bdev1", 00:15:01.393 "uuid": "73e79995-1950-40b3-80c4-3d0680739467", 00:15:01.393 "strip_size_kb": 64, 00:15:01.393 "state": "online", 00:15:01.393 "raid_level": "raid5f", 00:15:01.393 "superblock": true, 00:15:01.393 "num_base_bdevs": 4, 00:15:01.393 "num_base_bdevs_discovered": 4, 00:15:01.393 "num_base_bdevs_operational": 4, 00:15:01.393 "base_bdevs_list": [ 00:15:01.393 { 00:15:01.393 "name": "pt1", 00:15:01.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.393 "is_configured": true, 00:15:01.393 "data_offset": 2048, 00:15:01.393 "data_size": 63488 00:15:01.393 }, 00:15:01.393 { 00:15:01.393 "name": "pt2", 00:15:01.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.393 "is_configured": true, 00:15:01.393 "data_offset": 2048, 00:15:01.393 
"data_size": 63488 00:15:01.393 }, 00:15:01.393 { 00:15:01.393 "name": "pt3", 00:15:01.393 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.393 "is_configured": true, 00:15:01.393 "data_offset": 2048, 00:15:01.393 "data_size": 63488 00:15:01.393 }, 00:15:01.393 { 00:15:01.393 "name": "pt4", 00:15:01.393 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:01.393 "is_configured": true, 00:15:01.393 "data_offset": 2048, 00:15:01.393 "data_size": 63488 00:15:01.393 } 00:15:01.393 ] 00:15:01.393 }' 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.393 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.651 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:01.651 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:01.651 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:01.651 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:01.651 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:01.651 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:01.651 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:01.651 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:01.651 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.651 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.651 [2024-12-07 16:41:00.487941] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.651 16:41:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.651 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:01.651 "name": "raid_bdev1", 00:15:01.651 "aliases": [ 00:15:01.651 "73e79995-1950-40b3-80c4-3d0680739467" 00:15:01.651 ], 00:15:01.651 "product_name": "Raid Volume", 00:15:01.651 "block_size": 512, 00:15:01.651 "num_blocks": 190464, 00:15:01.651 "uuid": "73e79995-1950-40b3-80c4-3d0680739467", 00:15:01.651 "assigned_rate_limits": { 00:15:01.651 "rw_ios_per_sec": 0, 00:15:01.651 "rw_mbytes_per_sec": 0, 00:15:01.651 "r_mbytes_per_sec": 0, 00:15:01.651 "w_mbytes_per_sec": 0 00:15:01.651 }, 00:15:01.651 "claimed": false, 00:15:01.651 "zoned": false, 00:15:01.651 "supported_io_types": { 00:15:01.651 "read": true, 00:15:01.651 "write": true, 00:15:01.651 "unmap": false, 00:15:01.651 "flush": false, 00:15:01.651 "reset": true, 00:15:01.651 "nvme_admin": false, 00:15:01.651 "nvme_io": false, 00:15:01.651 "nvme_io_md": false, 00:15:01.651 "write_zeroes": true, 00:15:01.651 "zcopy": false, 00:15:01.651 "get_zone_info": false, 00:15:01.651 "zone_management": false, 00:15:01.651 "zone_append": false, 00:15:01.651 "compare": false, 00:15:01.651 "compare_and_write": false, 00:15:01.651 "abort": false, 00:15:01.651 "seek_hole": false, 00:15:01.651 "seek_data": false, 00:15:01.651 "copy": false, 00:15:01.651 "nvme_iov_md": false 00:15:01.651 }, 00:15:01.651 "driver_specific": { 00:15:01.651 "raid": { 00:15:01.652 "uuid": "73e79995-1950-40b3-80c4-3d0680739467", 00:15:01.652 "strip_size_kb": 64, 00:15:01.652 "state": "online", 00:15:01.652 "raid_level": "raid5f", 00:15:01.652 "superblock": true, 00:15:01.652 "num_base_bdevs": 4, 00:15:01.652 "num_base_bdevs_discovered": 4, 00:15:01.652 "num_base_bdevs_operational": 4, 00:15:01.652 "base_bdevs_list": [ 00:15:01.652 { 00:15:01.652 "name": "pt1", 00:15:01.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.652 "is_configured": true, 00:15:01.652 "data_offset": 2048, 
00:15:01.652 "data_size": 63488 00:15:01.652 }, 00:15:01.652 { 00:15:01.652 "name": "pt2", 00:15:01.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.652 "is_configured": true, 00:15:01.652 "data_offset": 2048, 00:15:01.652 "data_size": 63488 00:15:01.652 }, 00:15:01.652 { 00:15:01.652 "name": "pt3", 00:15:01.652 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.652 "is_configured": true, 00:15:01.652 "data_offset": 2048, 00:15:01.652 "data_size": 63488 00:15:01.652 }, 00:15:01.652 { 00:15:01.652 "name": "pt4", 00:15:01.652 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:01.652 "is_configured": true, 00:15:01.652 "data_offset": 2048, 00:15:01.652 "data_size": 63488 00:15:01.652 } 00:15:01.652 ] 00:15:01.652 } 00:15:01.652 } 00:15:01.652 }' 00:15:01.652 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:01.911 pt2 00:15:01.911 pt3 00:15:01.911 pt4' 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.911 16:41:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.911 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.171 [2024-12-07 16:41:00.835617] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=73e79995-1950-40b3-80c4-3d0680739467 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
73e79995-1950-40b3-80c4-3d0680739467 ']' 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.171 [2024-12-07 16:41:00.879332] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.171 [2024-12-07 16:41:00.879415] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.171 [2024-12-07 16:41:00.879503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.171 [2024-12-07 16:41:00.879618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.171 [2024-12-07 16:41:00.879688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:02.171 
16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.171 16:41:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.171 16:41:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.171 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.171 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:02.171 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:02.171 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:02.171 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:02.171 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:02.171 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.171 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:02.171 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.171 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:02.171 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:02.171 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.171 [2024-12-07 16:41:01.023133] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:02.171 [2024-12-07 16:41:01.025318] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:02.171 [2024-12-07 16:41:01.025420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:02.171 [2024-12-07 16:41:01.025469] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:02.171 [2024-12-07 16:41:01.025544] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:02.171 [2024-12-07 16:41:01.025610] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:02.171 [2024-12-07 16:41:01.025665] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:02.171 [2024-12-07 16:41:01.025744] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:02.171 [2024-12-07 16:41:01.025759] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.171 [2024-12-07 16:41:01.025770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:02.171 request: 00:15:02.171 { 00:15:02.171 "name": "raid_bdev1", 00:15:02.171 "raid_level": "raid5f", 00:15:02.172 "base_bdevs": [ 00:15:02.172 "malloc1", 00:15:02.172 "malloc2", 00:15:02.172 "malloc3", 00:15:02.172 "malloc4" 00:15:02.172 ], 00:15:02.172 "strip_size_kb": 64, 00:15:02.172 "superblock": false, 00:15:02.172 "method": "bdev_raid_create", 00:15:02.172 "req_id": 1 00:15:02.172 } 00:15:02.172 Got JSON-RPC error response 
00:15:02.172 response: 00:15:02.172 { 00:15:02.172 "code": -17, 00:15:02.172 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:02.172 } 00:15:02.172 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:02.172 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:02.172 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:02.172 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:02.172 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:02.172 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.172 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.172 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.172 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:02.172 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.431 [2024-12-07 16:41:01.090956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:02.431 [2024-12-07 16:41:01.091035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:02.431 [2024-12-07 16:41:01.091088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:02.431 [2024-12-07 16:41:01.091114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.431 [2024-12-07 16:41:01.093559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.431 [2024-12-07 16:41:01.093621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:02.431 [2024-12-07 16:41:01.093726] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:02.431 [2024-12-07 16:41:01.093784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:02.431 pt1 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.431 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.431 "name": "raid_bdev1", 00:15:02.431 "uuid": "73e79995-1950-40b3-80c4-3d0680739467", 00:15:02.431 "strip_size_kb": 64, 00:15:02.431 "state": "configuring", 00:15:02.431 "raid_level": "raid5f", 00:15:02.431 "superblock": true, 00:15:02.431 "num_base_bdevs": 4, 00:15:02.431 "num_base_bdevs_discovered": 1, 00:15:02.431 "num_base_bdevs_operational": 4, 00:15:02.431 "base_bdevs_list": [ 00:15:02.431 { 00:15:02.431 "name": "pt1", 00:15:02.431 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.431 "is_configured": true, 00:15:02.431 "data_offset": 2048, 00:15:02.431 "data_size": 63488 00:15:02.431 }, 00:15:02.431 { 00:15:02.431 "name": null, 00:15:02.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.431 "is_configured": false, 00:15:02.431 "data_offset": 2048, 00:15:02.431 "data_size": 63488 00:15:02.431 }, 00:15:02.431 { 00:15:02.431 "name": null, 00:15:02.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.431 "is_configured": false, 00:15:02.431 "data_offset": 2048, 00:15:02.431 "data_size": 63488 00:15:02.431 }, 00:15:02.431 { 00:15:02.431 "name": null, 00:15:02.431 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.431 "is_configured": false, 00:15:02.431 "data_offset": 2048, 00:15:02.432 "data_size": 63488 00:15:02.432 } 00:15:02.432 ] 00:15:02.432 }' 
00:15:02.432 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.432 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.691 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:02.691 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:02.691 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.691 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.691 [2024-12-07 16:41:01.542184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:02.691 [2024-12-07 16:41:01.542232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.691 [2024-12-07 16:41:01.542268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:02.691 [2024-12-07 16:41:01.542276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.692 [2024-12-07 16:41:01.542672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.692 [2024-12-07 16:41:01.542692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:02.692 [2024-12-07 16:41:01.542756] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:02.692 [2024-12-07 16:41:01.542775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.692 pt2 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.692 [2024-12-07 16:41:01.554185] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.692 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:02.951 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.951 "name": "raid_bdev1", 00:15:02.951 "uuid": "73e79995-1950-40b3-80c4-3d0680739467", 00:15:02.951 "strip_size_kb": 64, 00:15:02.951 "state": "configuring", 00:15:02.951 "raid_level": "raid5f", 00:15:02.951 "superblock": true, 00:15:02.951 "num_base_bdevs": 4, 00:15:02.951 "num_base_bdevs_discovered": 1, 00:15:02.951 "num_base_bdevs_operational": 4, 00:15:02.951 "base_bdevs_list": [ 00:15:02.951 { 00:15:02.951 "name": "pt1", 00:15:02.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.951 "is_configured": true, 00:15:02.951 "data_offset": 2048, 00:15:02.951 "data_size": 63488 00:15:02.951 }, 00:15:02.951 { 00:15:02.951 "name": null, 00:15:02.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.951 "is_configured": false, 00:15:02.951 "data_offset": 0, 00:15:02.951 "data_size": 63488 00:15:02.951 }, 00:15:02.951 { 00:15:02.951 "name": null, 00:15:02.951 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.951 "is_configured": false, 00:15:02.951 "data_offset": 2048, 00:15:02.951 "data_size": 63488 00:15:02.951 }, 00:15:02.951 { 00:15:02.951 "name": null, 00:15:02.951 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.951 "is_configured": false, 00:15:02.951 "data_offset": 2048, 00:15:02.951 "data_size": 63488 00:15:02.951 } 00:15:02.951 ] 00:15:02.951 }' 00:15:02.951 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.951 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.213 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:03.213 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:03.213 16:41:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:03.213 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.213 16:41:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.213 [2024-12-07 16:41:02.005395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.213 [2024-12-07 16:41:02.005507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.213 [2024-12-07 16:41:02.005539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:03.213 [2024-12-07 16:41:02.005568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.213 [2024-12-07 16:41:02.005960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.213 [2024-12-07 16:41:02.006018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.213 [2024-12-07 16:41:02.006107] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:03.213 [2024-12-07 16:41:02.006154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.213 pt2 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.213 [2024-12-07 16:41:02.017346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:03.213 [2024-12-07 16:41:02.017441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.213 [2024-12-07 16:41:02.017482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:03.213 [2024-12-07 16:41:02.017513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.213 [2024-12-07 16:41:02.017864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.213 [2024-12-07 16:41:02.017916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:03.213 [2024-12-07 16:41:02.017990] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:03.213 [2024-12-07 16:41:02.018041] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:03.213 pt3 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.213 [2024-12-07 16:41:02.029328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:03.213 [2024-12-07 16:41:02.029386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.213 [2024-12-07 16:41:02.029401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:03.213 [2024-12-07 16:41:02.029411] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.213 [2024-12-07 16:41:02.029715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.213 [2024-12-07 16:41:02.029733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:03.213 [2024-12-07 16:41:02.029788] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:03.213 [2024-12-07 16:41:02.029806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:03.213 [2024-12-07 16:41:02.029915] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:03.213 [2024-12-07 16:41:02.029930] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:03.213 [2024-12-07 16:41:02.030188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:03.213 [2024-12-07 16:41:02.030707] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:03.213 [2024-12-07 16:41:02.030766] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:03.213 [2024-12-07 16:41:02.030875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.213 pt4 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.213 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.213 "name": "raid_bdev1", 00:15:03.213 "uuid": "73e79995-1950-40b3-80c4-3d0680739467", 00:15:03.213 "strip_size_kb": 64, 00:15:03.213 "state": "online", 00:15:03.213 "raid_level": "raid5f", 00:15:03.213 "superblock": true, 00:15:03.213 "num_base_bdevs": 4, 00:15:03.213 "num_base_bdevs_discovered": 4, 00:15:03.214 "num_base_bdevs_operational": 4, 00:15:03.214 "base_bdevs_list": [ 00:15:03.214 { 00:15:03.214 "name": "pt1", 00:15:03.214 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.214 "is_configured": true, 00:15:03.214 
"data_offset": 2048, 00:15:03.214 "data_size": 63488 00:15:03.214 }, 00:15:03.214 { 00:15:03.214 "name": "pt2", 00:15:03.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.214 "is_configured": true, 00:15:03.214 "data_offset": 2048, 00:15:03.214 "data_size": 63488 00:15:03.214 }, 00:15:03.214 { 00:15:03.214 "name": "pt3", 00:15:03.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.214 "is_configured": true, 00:15:03.214 "data_offset": 2048, 00:15:03.214 "data_size": 63488 00:15:03.214 }, 00:15:03.214 { 00:15:03.214 "name": "pt4", 00:15:03.214 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:03.214 "is_configured": true, 00:15:03.214 "data_offset": 2048, 00:15:03.214 "data_size": 63488 00:15:03.214 } 00:15:03.214 ] 00:15:03.214 }' 00:15:03.214 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.214 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.784 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:03.784 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:03.784 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:03.784 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:03.784 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:03.784 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:03.784 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:03.784 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:03.784 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.784 16:41:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.784 [2024-12-07 16:41:02.524956] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.784 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.784 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:03.784 "name": "raid_bdev1", 00:15:03.784 "aliases": [ 00:15:03.784 "73e79995-1950-40b3-80c4-3d0680739467" 00:15:03.784 ], 00:15:03.784 "product_name": "Raid Volume", 00:15:03.784 "block_size": 512, 00:15:03.784 "num_blocks": 190464, 00:15:03.784 "uuid": "73e79995-1950-40b3-80c4-3d0680739467", 00:15:03.784 "assigned_rate_limits": { 00:15:03.784 "rw_ios_per_sec": 0, 00:15:03.784 "rw_mbytes_per_sec": 0, 00:15:03.784 "r_mbytes_per_sec": 0, 00:15:03.784 "w_mbytes_per_sec": 0 00:15:03.784 }, 00:15:03.784 "claimed": false, 00:15:03.784 "zoned": false, 00:15:03.784 "supported_io_types": { 00:15:03.784 "read": true, 00:15:03.784 "write": true, 00:15:03.784 "unmap": false, 00:15:03.784 "flush": false, 00:15:03.784 "reset": true, 00:15:03.784 "nvme_admin": false, 00:15:03.784 "nvme_io": false, 00:15:03.784 "nvme_io_md": false, 00:15:03.784 "write_zeroes": true, 00:15:03.784 "zcopy": false, 00:15:03.784 "get_zone_info": false, 00:15:03.784 "zone_management": false, 00:15:03.784 "zone_append": false, 00:15:03.784 "compare": false, 00:15:03.784 "compare_and_write": false, 00:15:03.784 "abort": false, 00:15:03.784 "seek_hole": false, 00:15:03.784 "seek_data": false, 00:15:03.784 "copy": false, 00:15:03.784 "nvme_iov_md": false 00:15:03.784 }, 00:15:03.784 "driver_specific": { 00:15:03.784 "raid": { 00:15:03.784 "uuid": "73e79995-1950-40b3-80c4-3d0680739467", 00:15:03.784 "strip_size_kb": 64, 00:15:03.784 "state": "online", 00:15:03.784 "raid_level": "raid5f", 00:15:03.784 "superblock": true, 00:15:03.784 "num_base_bdevs": 4, 00:15:03.784 "num_base_bdevs_discovered": 4, 
00:15:03.784 "num_base_bdevs_operational": 4, 00:15:03.784 "base_bdevs_list": [ 00:15:03.784 { 00:15:03.784 "name": "pt1", 00:15:03.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.784 "is_configured": true, 00:15:03.784 "data_offset": 2048, 00:15:03.784 "data_size": 63488 00:15:03.784 }, 00:15:03.784 { 00:15:03.784 "name": "pt2", 00:15:03.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.784 "is_configured": true, 00:15:03.784 "data_offset": 2048, 00:15:03.784 "data_size": 63488 00:15:03.784 }, 00:15:03.784 { 00:15:03.784 "name": "pt3", 00:15:03.784 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.784 "is_configured": true, 00:15:03.784 "data_offset": 2048, 00:15:03.784 "data_size": 63488 00:15:03.784 }, 00:15:03.784 { 00:15:03.785 "name": "pt4", 00:15:03.785 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:03.785 "is_configured": true, 00:15:03.785 "data_offset": 2048, 00:15:03.785 "data_size": 63488 00:15:03.785 } 00:15:03.785 ] 00:15:03.785 } 00:15:03.785 } 00:15:03.785 }' 00:15:03.785 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.785 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:03.785 pt2 00:15:03.785 pt3 00:15:03.785 pt4' 00:15:03.785 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.785 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:03.785 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.785 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:03.785 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:15:03.785 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.785 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.045 [2024-12-07 16:41:02.852342] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.045 16:41:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 73e79995-1950-40b3-80c4-3d0680739467 '!=' 73e79995-1950-40b3-80c4-3d0680739467 ']' 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:04.045 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.046 [2024-12-07 16:41:02.896142] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.046 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.305 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.305 "name": "raid_bdev1", 00:15:04.305 "uuid": "73e79995-1950-40b3-80c4-3d0680739467", 00:15:04.305 "strip_size_kb": 64, 00:15:04.305 "state": "online", 00:15:04.305 "raid_level": "raid5f", 00:15:04.305 "superblock": true, 00:15:04.305 "num_base_bdevs": 4, 00:15:04.305 "num_base_bdevs_discovered": 3, 00:15:04.305 "num_base_bdevs_operational": 3, 00:15:04.305 "base_bdevs_list": [ 00:15:04.305 { 00:15:04.305 "name": null, 00:15:04.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.305 "is_configured": false, 00:15:04.305 "data_offset": 0, 00:15:04.305 "data_size": 63488 00:15:04.305 }, 00:15:04.305 { 00:15:04.305 "name": "pt2", 00:15:04.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.305 "is_configured": true, 00:15:04.305 "data_offset": 2048, 00:15:04.305 "data_size": 63488 00:15:04.305 }, 00:15:04.305 { 00:15:04.305 "name": "pt3", 00:15:04.305 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.305 "is_configured": true, 00:15:04.305 "data_offset": 2048, 00:15:04.305 "data_size": 63488 00:15:04.305 }, 00:15:04.305 { 00:15:04.305 "name": "pt4", 00:15:04.305 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.305 "is_configured": true, 00:15:04.305 
"data_offset": 2048, 00:15:04.305 "data_size": 63488 00:15:04.305 } 00:15:04.305 ] 00:15:04.305 }' 00:15:04.305 16:41:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.305 16:41:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.565 [2024-12-07 16:41:03.359413] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.565 [2024-12-07 16:41:03.359440] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.565 [2024-12-07 16:41:03.359532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.565 [2024-12-07 16:41:03.359608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.565 [2024-12-07 16:41:03.359621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.565 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.566 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.566 [2024-12-07 16:41:03.459173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:04.566 [2024-12-07 16:41:03.459275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.566 [2024-12-07 16:41:03.459298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:04.566 [2024-12-07 16:41:03.459309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.566 [2024-12-07 16:41:03.461839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.566 [2024-12-07 16:41:03.461879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:04.566 [2024-12-07 16:41:03.461954] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:04.566 [2024-12-07 16:41:03.461990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:04.826 pt2 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.826 "name": "raid_bdev1", 00:15:04.826 "uuid": "73e79995-1950-40b3-80c4-3d0680739467", 00:15:04.826 "strip_size_kb": 64, 00:15:04.826 "state": "configuring", 00:15:04.826 "raid_level": "raid5f", 00:15:04.826 "superblock": true, 00:15:04.826 
"num_base_bdevs": 4, 00:15:04.826 "num_base_bdevs_discovered": 1, 00:15:04.826 "num_base_bdevs_operational": 3, 00:15:04.826 "base_bdevs_list": [ 00:15:04.826 { 00:15:04.826 "name": null, 00:15:04.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.826 "is_configured": false, 00:15:04.826 "data_offset": 2048, 00:15:04.826 "data_size": 63488 00:15:04.826 }, 00:15:04.826 { 00:15:04.826 "name": "pt2", 00:15:04.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.826 "is_configured": true, 00:15:04.826 "data_offset": 2048, 00:15:04.826 "data_size": 63488 00:15:04.826 }, 00:15:04.826 { 00:15:04.826 "name": null, 00:15:04.826 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.826 "is_configured": false, 00:15:04.826 "data_offset": 2048, 00:15:04.826 "data_size": 63488 00:15:04.826 }, 00:15:04.826 { 00:15:04.826 "name": null, 00:15:04.826 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.826 "is_configured": false, 00:15:04.826 "data_offset": 2048, 00:15:04.826 "data_size": 63488 00:15:04.826 } 00:15:04.826 ] 00:15:04.826 }' 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.826 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.086 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:05.086 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:05.086 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:05.086 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.086 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.086 [2024-12-07 16:41:03.926377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:05.086 [2024-12-07 
16:41:03.926486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.086 [2024-12-07 16:41:03.926521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:05.086 [2024-12-07 16:41:03.926553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.086 [2024-12-07 16:41:03.926984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.086 [2024-12-07 16:41:03.927038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:05.086 [2024-12-07 16:41:03.927130] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:05.086 [2024-12-07 16:41:03.927190] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:05.086 pt3 00:15:05.086 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.087 "name": "raid_bdev1", 00:15:05.087 "uuid": "73e79995-1950-40b3-80c4-3d0680739467", 00:15:05.087 "strip_size_kb": 64, 00:15:05.087 "state": "configuring", 00:15:05.087 "raid_level": "raid5f", 00:15:05.087 "superblock": true, 00:15:05.087 "num_base_bdevs": 4, 00:15:05.087 "num_base_bdevs_discovered": 2, 00:15:05.087 "num_base_bdevs_operational": 3, 00:15:05.087 "base_bdevs_list": [ 00:15:05.087 { 00:15:05.087 "name": null, 00:15:05.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.087 "is_configured": false, 00:15:05.087 "data_offset": 2048, 00:15:05.087 "data_size": 63488 00:15:05.087 }, 00:15:05.087 { 00:15:05.087 "name": "pt2", 00:15:05.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.087 "is_configured": true, 00:15:05.087 "data_offset": 2048, 00:15:05.087 "data_size": 63488 00:15:05.087 }, 00:15:05.087 { 00:15:05.087 "name": "pt3", 00:15:05.087 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.087 "is_configured": true, 00:15:05.087 "data_offset": 2048, 00:15:05.087 "data_size": 63488 00:15:05.087 }, 00:15:05.087 { 00:15:05.087 "name": null, 00:15:05.087 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:05.087 "is_configured": false, 00:15:05.087 "data_offset": 2048, 
00:15:05.087 "data_size": 63488 00:15:05.087 } 00:15:05.087 ] 00:15:05.087 }' 00:15:05.087 16:41:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.347 16:41:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.607 [2024-12-07 16:41:04.329672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:05.607 [2024-12-07 16:41:04.329734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.607 [2024-12-07 16:41:04.329758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:05.607 [2024-12-07 16:41:04.329770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.607 [2024-12-07 16:41:04.330161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.607 [2024-12-07 16:41:04.330189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:05.607 [2024-12-07 16:41:04.330261] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:05.607 [2024-12-07 16:41:04.330284] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:05.607 [2024-12-07 16:41:04.330405] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:05.607 [2024-12-07 16:41:04.330461] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:05.607 [2024-12-07 16:41:04.330713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:05.607 [2024-12-07 16:41:04.331260] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:05.607 [2024-12-07 16:41:04.331272] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:15:05.607 [2024-12-07 16:41:04.331548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.607 pt4 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.607 
16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.607 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.607 "name": "raid_bdev1", 00:15:05.607 "uuid": "73e79995-1950-40b3-80c4-3d0680739467", 00:15:05.607 "strip_size_kb": 64, 00:15:05.607 "state": "online", 00:15:05.607 "raid_level": "raid5f", 00:15:05.607 "superblock": true, 00:15:05.607 "num_base_bdevs": 4, 00:15:05.607 "num_base_bdevs_discovered": 3, 00:15:05.607 "num_base_bdevs_operational": 3, 00:15:05.607 "base_bdevs_list": [ 00:15:05.607 { 00:15:05.607 "name": null, 00:15:05.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.607 "is_configured": false, 00:15:05.607 "data_offset": 2048, 00:15:05.607 "data_size": 63488 00:15:05.607 }, 00:15:05.607 { 00:15:05.607 "name": "pt2", 00:15:05.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.607 "is_configured": true, 00:15:05.607 "data_offset": 2048, 00:15:05.607 "data_size": 63488 00:15:05.607 }, 00:15:05.607 { 00:15:05.607 "name": "pt3", 00:15:05.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.607 "is_configured": true, 00:15:05.608 "data_offset": 2048, 00:15:05.608 "data_size": 63488 00:15:05.608 }, 00:15:05.608 { 00:15:05.608 "name": "pt4", 00:15:05.608 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:05.608 "is_configured": true, 00:15:05.608 "data_offset": 2048, 00:15:05.608 "data_size": 63488 00:15:05.608 } 00:15:05.608 ] 00:15:05.608 }' 00:15:05.608 16:41:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.608 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.868 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:05.868 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.868 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.868 [2024-12-07 16:41:04.721496] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.868 [2024-12-07 16:41:04.721567] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.868 [2024-12-07 16:41:04.721648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.868 [2024-12-07 16:41:04.721733] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.868 [2024-12-07 16:41:04.721801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:05.868 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.868 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.868 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:05.868 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.868 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.868 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.129 [2024-12-07 16:41:04.797399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:06.129 [2024-12-07 16:41:04.797451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.129 [2024-12-07 16:41:04.797472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:06.129 [2024-12-07 16:41:04.797481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.129 [2024-12-07 16:41:04.799985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.129 [2024-12-07 16:41:04.800060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:06.129 [2024-12-07 16:41:04.800137] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:06.129 [2024-12-07 16:41:04.800183] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:06.129 
[2024-12-07 16:41:04.800292] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:06.129 [2024-12-07 16:41:04.800304] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.129 [2024-12-07 16:41:04.800322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:06.129 [2024-12-07 16:41:04.800378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:06.129 [2024-12-07 16:41:04.800501] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:06.129 pt1 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.129 "name": "raid_bdev1", 00:15:06.129 "uuid": "73e79995-1950-40b3-80c4-3d0680739467", 00:15:06.129 "strip_size_kb": 64, 00:15:06.129 "state": "configuring", 00:15:06.129 "raid_level": "raid5f", 00:15:06.129 "superblock": true, 00:15:06.129 "num_base_bdevs": 4, 00:15:06.129 "num_base_bdevs_discovered": 2, 00:15:06.129 "num_base_bdevs_operational": 3, 00:15:06.129 "base_bdevs_list": [ 00:15:06.129 { 00:15:06.129 "name": null, 00:15:06.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.129 "is_configured": false, 00:15:06.129 "data_offset": 2048, 00:15:06.129 "data_size": 63488 00:15:06.129 }, 00:15:06.129 { 00:15:06.129 "name": "pt2", 00:15:06.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.129 "is_configured": true, 00:15:06.129 "data_offset": 2048, 00:15:06.129 "data_size": 63488 00:15:06.129 }, 00:15:06.129 { 00:15:06.129 "name": "pt3", 00:15:06.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.129 "is_configured": true, 00:15:06.129 "data_offset": 2048, 00:15:06.129 "data_size": 63488 00:15:06.129 }, 00:15:06.129 { 00:15:06.129 "name": null, 00:15:06.129 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:06.129 "is_configured": false, 00:15:06.129 "data_offset": 2048, 00:15:06.129 "data_size": 63488 00:15:06.129 } 00:15:06.129 ] 
00:15:06.129 }' 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.129 16:41:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.395 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:06.395 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:06.395 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.395 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.656 [2024-12-07 16:41:05.320468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:06.656 [2024-12-07 16:41:05.320564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.656 [2024-12-07 16:41:05.320600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:06.656 [2024-12-07 16:41:05.320631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.656 [2024-12-07 16:41:05.321038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.656 [2024-12-07 16:41:05.321098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:06.656 [2024-12-07 16:41:05.321182] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:06.656 [2024-12-07 16:41:05.321230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:06.656 [2024-12-07 16:41:05.321357] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:06.656 [2024-12-07 16:41:05.321399] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:06.656 [2024-12-07 16:41:05.321656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:06.656 [2024-12-07 16:41:05.322236] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:06.656 [2024-12-07 16:41:05.322281] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:06.656 [2024-12-07 16:41:05.322506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.656 pt4 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.656 16:41:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.656 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.657 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.657 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.657 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.657 "name": "raid_bdev1", 00:15:06.657 "uuid": "73e79995-1950-40b3-80c4-3d0680739467", 00:15:06.657 "strip_size_kb": 64, 00:15:06.657 "state": "online", 00:15:06.657 "raid_level": "raid5f", 00:15:06.657 "superblock": true, 00:15:06.657 "num_base_bdevs": 4, 00:15:06.657 "num_base_bdevs_discovered": 3, 00:15:06.657 "num_base_bdevs_operational": 3, 00:15:06.657 "base_bdevs_list": [ 00:15:06.657 { 00:15:06.657 "name": null, 00:15:06.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.657 "is_configured": false, 00:15:06.657 "data_offset": 2048, 00:15:06.657 "data_size": 63488 00:15:06.657 }, 00:15:06.657 { 00:15:06.657 "name": "pt2", 00:15:06.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.657 "is_configured": true, 00:15:06.657 "data_offset": 2048, 00:15:06.657 "data_size": 63488 00:15:06.657 }, 00:15:06.657 { 00:15:06.657 "name": "pt3", 00:15:06.657 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.657 "is_configured": true, 00:15:06.657 "data_offset": 2048, 00:15:06.657 "data_size": 63488 
00:15:06.657 }, 00:15:06.657 { 00:15:06.657 "name": "pt4", 00:15:06.657 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:06.657 "is_configured": true, 00:15:06.657 "data_offset": 2048, 00:15:06.657 "data_size": 63488 00:15:06.657 } 00:15:06.657 ] 00:15:06.657 }' 00:15:06.657 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.657 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.917 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:06.917 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:06.917 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.917 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.917 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.917 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:06.917 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:06.917 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:06.917 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.917 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.917 [2024-12-07 16:41:05.788627] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.917 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.176 16:41:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 73e79995-1950-40b3-80c4-3d0680739467 '!=' 73e79995-1950-40b3-80c4-3d0680739467 ']' 00:15:07.176 16:41:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94880 00:15:07.176 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94880 ']' 00:15:07.176 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94880 00:15:07.176 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:07.176 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.176 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94880 00:15:07.176 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:07.176 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:07.176 killing process with pid 94880 00:15:07.176 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94880' 00:15:07.176 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94880 00:15:07.176 [2024-12-07 16:41:05.867067] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.176 [2024-12-07 16:41:05.867173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.176 16:41:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94880 00:15:07.176 [2024-12-07 16:41:05.867259] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.176 [2024-12-07 16:41:05.867270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:07.176 [2024-12-07 16:41:05.945067] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.436 16:41:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:07.436 
00:15:07.436 real 0m7.347s 00:15:07.436 user 0m12.098s 00:15:07.436 sys 0m1.669s 00:15:07.436 16:41:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.436 16:41:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.437 ************************************ 00:15:07.437 END TEST raid5f_superblock_test 00:15:07.437 ************************************ 00:15:07.697 16:41:06 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:07.697 16:41:06 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:07.697 16:41:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:07.697 16:41:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.697 16:41:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:07.697 ************************************ 00:15:07.697 START TEST raid5f_rebuild_test 00:15:07.697 ************************************ 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:07.697 16:41:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95349 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95349 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 95349 ']' 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.697 16:41:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.697 [2024-12-07 16:41:06.528311] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:07.697 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:07.697 Zero copy mechanism will not be used. 
00:15:07.697 [2024-12-07 16:41:06.528546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95349 ] 00:15:07.958 [2024-12-07 16:41:06.691684] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.958 [2024-12-07 16:41:06.767646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.958 [2024-12-07 16:41:06.843591] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.958 [2024-12-07 16:41:06.843632] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.528 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.528 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:08.528 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.529 BaseBdev1_malloc 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.529 [2024-12-07 16:41:07.366032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:08.529 [2024-12-07 16:41:07.366104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.529 [2024-12-07 16:41:07.366133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:08.529 [2024-12-07 16:41:07.366157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.529 [2024-12-07 16:41:07.368598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.529 [2024-12-07 16:41:07.368710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:08.529 BaseBdev1 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.529 BaseBdev2_malloc 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.529 [2024-12-07 16:41:07.408187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:08.529 [2024-12-07 16:41:07.408239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.529 [2024-12-07 16:41:07.408262] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:08.529 [2024-12-07 16:41:07.408271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.529 [2024-12-07 16:41:07.410565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.529 [2024-12-07 16:41:07.410662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:08.529 BaseBdev2 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.529 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.790 BaseBdev3_malloc 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.790 [2024-12-07 16:41:07.442777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:08.790 [2024-12-07 16:41:07.442821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.790 [2024-12-07 16:41:07.442846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:08.790 [2024-12-07 16:41:07.442855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.790 
[2024-12-07 16:41:07.445193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.790 [2024-12-07 16:41:07.445294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:08.790 BaseBdev3 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.790 BaseBdev4_malloc 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.790 [2024-12-07 16:41:07.477291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:08.790 [2024-12-07 16:41:07.477349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.790 [2024-12-07 16:41:07.477375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:08.790 [2024-12-07 16:41:07.477383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.790 [2024-12-07 16:41:07.479707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.790 [2024-12-07 16:41:07.479802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:15:08.790 BaseBdev4 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.790 spare_malloc 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.790 spare_delay 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.790 [2024-12-07 16:41:07.523689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:08.790 [2024-12-07 16:41:07.523736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.790 [2024-12-07 16:41:07.523757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:08.790 [2024-12-07 16:41:07.523766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.790 [2024-12-07 16:41:07.526075] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.790 [2024-12-07 16:41:07.526169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:08.790 spare 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.790 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.791 [2024-12-07 16:41:07.535744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.791 [2024-12-07 16:41:07.537837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.791 [2024-12-07 16:41:07.537905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:08.791 [2024-12-07 16:41:07.537944] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:08.791 [2024-12-07 16:41:07.538049] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:08.791 [2024-12-07 16:41:07.538058] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:08.791 [2024-12-07 16:41:07.538306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:08.791 [2024-12-07 16:41:07.538793] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:08.791 [2024-12-07 16:41:07.538809] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:08.791 [2024-12-07 16:41:07.538917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.791 16:41:07 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.791 "name": "raid_bdev1", 00:15:08.791 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:08.791 "strip_size_kb": 64, 00:15:08.791 "state": "online", 00:15:08.791 
"raid_level": "raid5f", 00:15:08.791 "superblock": false, 00:15:08.791 "num_base_bdevs": 4, 00:15:08.791 "num_base_bdevs_discovered": 4, 00:15:08.791 "num_base_bdevs_operational": 4, 00:15:08.791 "base_bdevs_list": [ 00:15:08.791 { 00:15:08.791 "name": "BaseBdev1", 00:15:08.791 "uuid": "efb5f682-6ea7-5f25-b3e5-f6a144e3973c", 00:15:08.791 "is_configured": true, 00:15:08.791 "data_offset": 0, 00:15:08.791 "data_size": 65536 00:15:08.791 }, 00:15:08.791 { 00:15:08.791 "name": "BaseBdev2", 00:15:08.791 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:08.791 "is_configured": true, 00:15:08.791 "data_offset": 0, 00:15:08.791 "data_size": 65536 00:15:08.791 }, 00:15:08.791 { 00:15:08.791 "name": "BaseBdev3", 00:15:08.791 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:08.791 "is_configured": true, 00:15:08.791 "data_offset": 0, 00:15:08.791 "data_size": 65536 00:15:08.791 }, 00:15:08.791 { 00:15:08.791 "name": "BaseBdev4", 00:15:08.791 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:08.791 "is_configured": true, 00:15:08.791 "data_offset": 0, 00:15:08.791 "data_size": 65536 00:15:08.791 } 00:15:08.791 ] 00:15:08.791 }' 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.791 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.361 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.361 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.361 16:41:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.361 16:41:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:09.361 [2024-12-07 16:41:08.001250] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:15:09.361 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:09.621 [2024-12-07 16:41:08.280584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:09.621 /dev/nbd0 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.621 1+0 records in 00:15:09.621 1+0 records out 00:15:09.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416531 s, 9.8 MB/s 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:09.621 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:10.191 512+0 records in 00:15:10.191 512+0 records out 00:15:10.191 100663296 bytes (101 MB, 96 MiB) copied, 0.441776 s, 228 MB/s 00:15:10.191 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:10.191 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.191 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:10.191 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:10.191 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:10.191 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.191 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:10.191 16:41:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:10.191 
[2024-12-07 16:41:09.000533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.191 [2024-12-07 16:41:09.019247] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.191 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.191 "name": "raid_bdev1", 00:15:10.191 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:10.191 "strip_size_kb": 64, 00:15:10.191 "state": "online", 00:15:10.191 "raid_level": "raid5f", 00:15:10.191 "superblock": false, 00:15:10.191 "num_base_bdevs": 4, 00:15:10.191 "num_base_bdevs_discovered": 3, 00:15:10.191 "num_base_bdevs_operational": 3, 00:15:10.191 "base_bdevs_list": [ 00:15:10.191 { 00:15:10.191 "name": null, 00:15:10.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.191 "is_configured": false, 00:15:10.192 "data_offset": 0, 00:15:10.192 "data_size": 65536 00:15:10.192 }, 00:15:10.192 { 00:15:10.192 "name": "BaseBdev2", 00:15:10.192 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:10.192 "is_configured": true, 00:15:10.192 "data_offset": 0, 00:15:10.192 "data_size": 65536 00:15:10.192 }, 00:15:10.192 { 00:15:10.192 "name": "BaseBdev3", 00:15:10.192 "uuid": 
"b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:10.192 "is_configured": true, 00:15:10.192 "data_offset": 0, 00:15:10.192 "data_size": 65536 00:15:10.192 }, 00:15:10.192 { 00:15:10.192 "name": "BaseBdev4", 00:15:10.192 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:10.192 "is_configured": true, 00:15:10.192 "data_offset": 0, 00:15:10.192 "data_size": 65536 00:15:10.192 } 00:15:10.192 ] 00:15:10.192 }' 00:15:10.192 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.192 16:41:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.759 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:10.759 16:41:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.759 16:41:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.759 [2024-12-07 16:41:09.474485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.759 [2024-12-07 16:41:09.480363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:15:10.759 16:41:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.759 16:41:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:10.759 [2024-12-07 16:41:09.482818] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:11.696 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.696 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.696 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.696 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.696 16:41:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.696 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.696 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.696 16:41:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.696 16:41:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.696 16:41:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.696 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.696 "name": "raid_bdev1", 00:15:11.697 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:11.697 "strip_size_kb": 64, 00:15:11.697 "state": "online", 00:15:11.697 "raid_level": "raid5f", 00:15:11.697 "superblock": false, 00:15:11.697 "num_base_bdevs": 4, 00:15:11.697 "num_base_bdevs_discovered": 4, 00:15:11.697 "num_base_bdevs_operational": 4, 00:15:11.697 "process": { 00:15:11.697 "type": "rebuild", 00:15:11.697 "target": "spare", 00:15:11.697 "progress": { 00:15:11.697 "blocks": 19200, 00:15:11.697 "percent": 9 00:15:11.697 } 00:15:11.697 }, 00:15:11.697 "base_bdevs_list": [ 00:15:11.697 { 00:15:11.697 "name": "spare", 00:15:11.697 "uuid": "ba3b1b5a-a6b4-5d7f-a000-d11a785be416", 00:15:11.697 "is_configured": true, 00:15:11.697 "data_offset": 0, 00:15:11.697 "data_size": 65536 00:15:11.697 }, 00:15:11.697 { 00:15:11.697 "name": "BaseBdev2", 00:15:11.697 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:11.697 "is_configured": true, 00:15:11.697 "data_offset": 0, 00:15:11.697 "data_size": 65536 00:15:11.697 }, 00:15:11.697 { 00:15:11.697 "name": "BaseBdev3", 00:15:11.697 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:11.697 "is_configured": true, 00:15:11.697 "data_offset": 0, 00:15:11.697 "data_size": 65536 00:15:11.697 }, 
00:15:11.697 { 00:15:11.697 "name": "BaseBdev4", 00:15:11.697 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:11.697 "is_configured": true, 00:15:11.697 "data_offset": 0, 00:15:11.697 "data_size": 65536 00:15:11.697 } 00:15:11.697 ] 00:15:11.697 }' 00:15:11.697 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.697 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.697 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.957 [2024-12-07 16:41:10.623185] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.957 [2024-12-07 16:41:10.689476] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:11.957 [2024-12-07 16:41:10.689531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.957 [2024-12-07 16:41:10.689550] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.957 [2024-12-07 16:41:10.689558] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.957 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.957 "name": "raid_bdev1", 00:15:11.957 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:11.957 "strip_size_kb": 64, 00:15:11.957 "state": "online", 00:15:11.957 "raid_level": "raid5f", 00:15:11.957 "superblock": false, 00:15:11.957 "num_base_bdevs": 4, 00:15:11.957 "num_base_bdevs_discovered": 3, 00:15:11.957 "num_base_bdevs_operational": 3, 00:15:11.957 "base_bdevs_list": [ 00:15:11.957 { 00:15:11.957 "name": null, 00:15:11.957 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:11.957 "is_configured": false, 00:15:11.957 "data_offset": 0, 00:15:11.957 "data_size": 65536 00:15:11.957 }, 00:15:11.957 { 00:15:11.957 "name": "BaseBdev2", 00:15:11.957 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:11.957 "is_configured": true, 00:15:11.957 "data_offset": 0, 00:15:11.958 "data_size": 65536 00:15:11.958 }, 00:15:11.958 { 00:15:11.958 "name": "BaseBdev3", 00:15:11.958 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:11.958 "is_configured": true, 00:15:11.958 "data_offset": 0, 00:15:11.958 "data_size": 65536 00:15:11.958 }, 00:15:11.958 { 00:15:11.958 "name": "BaseBdev4", 00:15:11.958 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:11.958 "is_configured": true, 00:15:11.958 "data_offset": 0, 00:15:11.958 "data_size": 65536 00:15:11.958 } 00:15:11.958 ] 00:15:11.958 }' 00:15:11.958 16:41:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.958 16:41:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.528 16:41:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.528 "name": "raid_bdev1", 00:15:12.528 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:12.528 "strip_size_kb": 64, 00:15:12.528 "state": "online", 00:15:12.528 "raid_level": "raid5f", 00:15:12.528 "superblock": false, 00:15:12.528 "num_base_bdevs": 4, 00:15:12.528 "num_base_bdevs_discovered": 3, 00:15:12.528 "num_base_bdevs_operational": 3, 00:15:12.528 "base_bdevs_list": [ 00:15:12.528 { 00:15:12.528 "name": null, 00:15:12.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.528 "is_configured": false, 00:15:12.528 "data_offset": 0, 00:15:12.528 "data_size": 65536 00:15:12.528 }, 00:15:12.528 { 00:15:12.528 "name": "BaseBdev2", 00:15:12.528 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:12.528 "is_configured": true, 00:15:12.528 "data_offset": 0, 00:15:12.528 "data_size": 65536 00:15:12.528 }, 00:15:12.528 { 00:15:12.528 "name": "BaseBdev3", 00:15:12.528 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:12.528 "is_configured": true, 00:15:12.528 "data_offset": 0, 00:15:12.528 "data_size": 65536 00:15:12.528 }, 00:15:12.528 { 00:15:12.528 "name": "BaseBdev4", 00:15:12.528 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:12.528 "is_configured": true, 00:15:12.528 "data_offset": 0, 00:15:12.528 "data_size": 65536 00:15:12.528 } 00:15:12.528 ] 00:15:12.528 }' 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.528 [2024-12-07 16:41:11.265192] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.528 [2024-12-07 16:41:11.270426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.528 16:41:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:12.528 [2024-12-07 16:41:11.272908] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.467 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.467 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.467 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.467 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.467 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.467 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.467 16:41:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.467 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.467 16:41:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.467 16:41:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.467 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.467 "name": "raid_bdev1", 00:15:13.467 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:13.467 "strip_size_kb": 64, 00:15:13.467 "state": "online", 00:15:13.467 "raid_level": "raid5f", 00:15:13.467 "superblock": false, 00:15:13.467 "num_base_bdevs": 4, 00:15:13.467 "num_base_bdevs_discovered": 4, 00:15:13.467 "num_base_bdevs_operational": 4, 00:15:13.467 "process": { 00:15:13.468 "type": "rebuild", 00:15:13.468 "target": "spare", 00:15:13.468 "progress": { 00:15:13.468 "blocks": 19200, 00:15:13.468 "percent": 9 00:15:13.468 } 00:15:13.468 }, 00:15:13.468 "base_bdevs_list": [ 00:15:13.468 { 00:15:13.468 "name": "spare", 00:15:13.468 "uuid": "ba3b1b5a-a6b4-5d7f-a000-d11a785be416", 00:15:13.468 "is_configured": true, 00:15:13.468 "data_offset": 0, 00:15:13.468 "data_size": 65536 00:15:13.468 }, 00:15:13.468 { 00:15:13.468 "name": "BaseBdev2", 00:15:13.468 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:13.468 "is_configured": true, 00:15:13.468 "data_offset": 0, 00:15:13.468 "data_size": 65536 00:15:13.468 }, 00:15:13.468 { 00:15:13.468 "name": "BaseBdev3", 00:15:13.468 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:13.468 "is_configured": true, 00:15:13.468 "data_offset": 0, 00:15:13.468 "data_size": 65536 00:15:13.468 }, 00:15:13.468 { 00:15:13.468 "name": "BaseBdev4", 00:15:13.468 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:13.468 "is_configured": true, 00:15:13.468 "data_offset": 0, 00:15:13.468 "data_size": 65536 00:15:13.468 } 00:15:13.468 ] 00:15:13.468 }' 00:15:13.468 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=524 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.727 "name": "raid_bdev1", 00:15:13.727 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 
00:15:13.727 "strip_size_kb": 64, 00:15:13.727 "state": "online", 00:15:13.727 "raid_level": "raid5f", 00:15:13.727 "superblock": false, 00:15:13.727 "num_base_bdevs": 4, 00:15:13.727 "num_base_bdevs_discovered": 4, 00:15:13.727 "num_base_bdevs_operational": 4, 00:15:13.727 "process": { 00:15:13.727 "type": "rebuild", 00:15:13.727 "target": "spare", 00:15:13.727 "progress": { 00:15:13.727 "blocks": 21120, 00:15:13.727 "percent": 10 00:15:13.727 } 00:15:13.727 }, 00:15:13.727 "base_bdevs_list": [ 00:15:13.727 { 00:15:13.727 "name": "spare", 00:15:13.727 "uuid": "ba3b1b5a-a6b4-5d7f-a000-d11a785be416", 00:15:13.727 "is_configured": true, 00:15:13.727 "data_offset": 0, 00:15:13.727 "data_size": 65536 00:15:13.727 }, 00:15:13.727 { 00:15:13.727 "name": "BaseBdev2", 00:15:13.727 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:13.727 "is_configured": true, 00:15:13.727 "data_offset": 0, 00:15:13.727 "data_size": 65536 00:15:13.727 }, 00:15:13.727 { 00:15:13.727 "name": "BaseBdev3", 00:15:13.727 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:13.727 "is_configured": true, 00:15:13.727 "data_offset": 0, 00:15:13.727 "data_size": 65536 00:15:13.727 }, 00:15:13.727 { 00:15:13.727 "name": "BaseBdev4", 00:15:13.727 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:13.727 "is_configured": true, 00:15:13.727 "data_offset": 0, 00:15:13.727 "data_size": 65536 00:15:13.727 } 00:15:13.727 ] 00:15:13.727 }' 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.727 16:41:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:15.113 16:41:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.113 16:41:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.113 16:41:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.113 16:41:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.113 16:41:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.113 16:41:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.114 16:41:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.114 16:41:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.114 16:41:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.114 16:41:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.114 16:41:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.114 16:41:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.114 "name": "raid_bdev1", 00:15:15.114 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:15.114 "strip_size_kb": 64, 00:15:15.114 "state": "online", 00:15:15.114 "raid_level": "raid5f", 00:15:15.114 "superblock": false, 00:15:15.114 "num_base_bdevs": 4, 00:15:15.114 "num_base_bdevs_discovered": 4, 00:15:15.114 "num_base_bdevs_operational": 4, 00:15:15.114 "process": { 00:15:15.114 "type": "rebuild", 00:15:15.114 "target": "spare", 00:15:15.114 "progress": { 00:15:15.114 "blocks": 42240, 00:15:15.114 "percent": 21 00:15:15.114 } 00:15:15.114 }, 00:15:15.114 "base_bdevs_list": [ 00:15:15.114 { 00:15:15.114 "name": "spare", 00:15:15.114 "uuid": "ba3b1b5a-a6b4-5d7f-a000-d11a785be416", 
00:15:15.114 "is_configured": true, 00:15:15.114 "data_offset": 0, 00:15:15.114 "data_size": 65536 00:15:15.114 }, 00:15:15.114 { 00:15:15.114 "name": "BaseBdev2", 00:15:15.114 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:15.114 "is_configured": true, 00:15:15.114 "data_offset": 0, 00:15:15.114 "data_size": 65536 00:15:15.114 }, 00:15:15.114 { 00:15:15.114 "name": "BaseBdev3", 00:15:15.114 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:15.114 "is_configured": true, 00:15:15.114 "data_offset": 0, 00:15:15.114 "data_size": 65536 00:15:15.114 }, 00:15:15.114 { 00:15:15.114 "name": "BaseBdev4", 00:15:15.114 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:15.114 "is_configured": true, 00:15:15.114 "data_offset": 0, 00:15:15.114 "data_size": 65536 00:15:15.114 } 00:15:15.114 ] 00:15:15.114 }' 00:15:15.114 16:41:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.114 16:41:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.114 16:41:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.114 16:41:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.114 16:41:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.060 16:41:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.060 16:41:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.060 16:41:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.060 16:41:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.060 16:41:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.060 16:41:14 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.060 16:41:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.060 16:41:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.060 16:41:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.060 16:41:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.060 16:41:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.060 16:41:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.060 "name": "raid_bdev1", 00:15:16.060 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:16.060 "strip_size_kb": 64, 00:15:16.060 "state": "online", 00:15:16.060 "raid_level": "raid5f", 00:15:16.060 "superblock": false, 00:15:16.060 "num_base_bdevs": 4, 00:15:16.060 "num_base_bdevs_discovered": 4, 00:15:16.060 "num_base_bdevs_operational": 4, 00:15:16.061 "process": { 00:15:16.061 "type": "rebuild", 00:15:16.061 "target": "spare", 00:15:16.061 "progress": { 00:15:16.061 "blocks": 65280, 00:15:16.061 "percent": 33 00:15:16.061 } 00:15:16.061 }, 00:15:16.061 "base_bdevs_list": [ 00:15:16.061 { 00:15:16.061 "name": "spare", 00:15:16.061 "uuid": "ba3b1b5a-a6b4-5d7f-a000-d11a785be416", 00:15:16.061 "is_configured": true, 00:15:16.061 "data_offset": 0, 00:15:16.061 "data_size": 65536 00:15:16.061 }, 00:15:16.061 { 00:15:16.061 "name": "BaseBdev2", 00:15:16.061 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:16.061 "is_configured": true, 00:15:16.061 "data_offset": 0, 00:15:16.061 "data_size": 65536 00:15:16.061 }, 00:15:16.061 { 00:15:16.061 "name": "BaseBdev3", 00:15:16.061 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:16.061 "is_configured": true, 00:15:16.061 "data_offset": 0, 00:15:16.061 "data_size": 65536 00:15:16.061 }, 00:15:16.061 { 00:15:16.061 "name": 
"BaseBdev4", 00:15:16.061 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:16.061 "is_configured": true, 00:15:16.061 "data_offset": 0, 00:15:16.061 "data_size": 65536 00:15:16.061 } 00:15:16.061 ] 00:15:16.061 }' 00:15:16.061 16:41:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.061 16:41:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.061 16:41:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.061 16:41:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.061 16:41:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.999 16:41:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.999 16:41:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.999 16:41:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.999 16:41:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.999 16:41:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.999 16:41:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.999 16:41:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.999 16:41:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.999 16:41:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.999 16:41:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.999 16:41:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.999 16:41:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.999 "name": "raid_bdev1", 00:15:16.999 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:16.999 "strip_size_kb": 64, 00:15:16.999 "state": "online", 00:15:16.999 "raid_level": "raid5f", 00:15:16.999 "superblock": false, 00:15:16.999 "num_base_bdevs": 4, 00:15:16.999 "num_base_bdevs_discovered": 4, 00:15:16.999 "num_base_bdevs_operational": 4, 00:15:16.999 "process": { 00:15:16.999 "type": "rebuild", 00:15:16.999 "target": "spare", 00:15:16.999 "progress": { 00:15:16.999 "blocks": 86400, 00:15:16.999 "percent": 43 00:15:16.999 } 00:15:16.999 }, 00:15:16.999 "base_bdevs_list": [ 00:15:16.999 { 00:15:16.999 "name": "spare", 00:15:16.999 "uuid": "ba3b1b5a-a6b4-5d7f-a000-d11a785be416", 00:15:16.999 "is_configured": true, 00:15:16.999 "data_offset": 0, 00:15:16.999 "data_size": 65536 00:15:16.999 }, 00:15:16.999 { 00:15:16.999 "name": "BaseBdev2", 00:15:16.999 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:16.999 "is_configured": true, 00:15:16.999 "data_offset": 0, 00:15:16.999 "data_size": 65536 00:15:16.999 }, 00:15:16.999 { 00:15:16.999 "name": "BaseBdev3", 00:15:16.999 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:16.999 "is_configured": true, 00:15:16.999 "data_offset": 0, 00:15:16.999 "data_size": 65536 00:15:16.999 }, 00:15:16.999 { 00:15:16.999 "name": "BaseBdev4", 00:15:16.999 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:16.999 "is_configured": true, 00:15:16.999 "data_offset": 0, 00:15:16.999 "data_size": 65536 00:15:16.999 } 00:15:16.999 ] 00:15:16.999 }' 00:15:17.258 16:41:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.258 16:41:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.258 16:41:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.258 16:41:15 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.258 16:41:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.194 16:41:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.194 16:41:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.194 16:41:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.194 16:41:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.194 16:41:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.194 16:41:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.194 16:41:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.194 16:41:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.194 16:41:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.194 16:41:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.194 16:41:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.194 16:41:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.194 "name": "raid_bdev1", 00:15:18.194 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:18.194 "strip_size_kb": 64, 00:15:18.194 "state": "online", 00:15:18.194 "raid_level": "raid5f", 00:15:18.194 "superblock": false, 00:15:18.194 "num_base_bdevs": 4, 00:15:18.194 "num_base_bdevs_discovered": 4, 00:15:18.194 "num_base_bdevs_operational": 4, 00:15:18.194 "process": { 00:15:18.194 "type": "rebuild", 00:15:18.194 "target": "spare", 00:15:18.194 "progress": { 00:15:18.194 "blocks": 109440, 00:15:18.194 "percent": 55 00:15:18.194 } 
00:15:18.194 }, 00:15:18.194 "base_bdevs_list": [ 00:15:18.194 { 00:15:18.194 "name": "spare", 00:15:18.194 "uuid": "ba3b1b5a-a6b4-5d7f-a000-d11a785be416", 00:15:18.194 "is_configured": true, 00:15:18.194 "data_offset": 0, 00:15:18.194 "data_size": 65536 00:15:18.194 }, 00:15:18.194 { 00:15:18.194 "name": "BaseBdev2", 00:15:18.194 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:18.194 "is_configured": true, 00:15:18.194 "data_offset": 0, 00:15:18.194 "data_size": 65536 00:15:18.194 }, 00:15:18.194 { 00:15:18.194 "name": "BaseBdev3", 00:15:18.194 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:18.194 "is_configured": true, 00:15:18.194 "data_offset": 0, 00:15:18.194 "data_size": 65536 00:15:18.194 }, 00:15:18.194 { 00:15:18.194 "name": "BaseBdev4", 00:15:18.194 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:18.194 "is_configured": true, 00:15:18.194 "data_offset": 0, 00:15:18.194 "data_size": 65536 00:15:18.194 } 00:15:18.194 ] 00:15:18.194 }' 00:15:18.194 16:41:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.453 16:41:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.453 16:41:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.453 16:41:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.453 16:41:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.390 
16:41:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.390 "name": "raid_bdev1", 00:15:19.390 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:19.390 "strip_size_kb": 64, 00:15:19.390 "state": "online", 00:15:19.390 "raid_level": "raid5f", 00:15:19.390 "superblock": false, 00:15:19.390 "num_base_bdevs": 4, 00:15:19.390 "num_base_bdevs_discovered": 4, 00:15:19.390 "num_base_bdevs_operational": 4, 00:15:19.390 "process": { 00:15:19.390 "type": "rebuild", 00:15:19.390 "target": "spare", 00:15:19.390 "progress": { 00:15:19.390 "blocks": 130560, 00:15:19.390 "percent": 66 00:15:19.390 } 00:15:19.390 }, 00:15:19.390 "base_bdevs_list": [ 00:15:19.390 { 00:15:19.390 "name": "spare", 00:15:19.390 "uuid": "ba3b1b5a-a6b4-5d7f-a000-d11a785be416", 00:15:19.390 "is_configured": true, 00:15:19.390 "data_offset": 0, 00:15:19.390 "data_size": 65536 00:15:19.390 }, 00:15:19.390 { 00:15:19.390 "name": "BaseBdev2", 00:15:19.390 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:19.390 "is_configured": true, 00:15:19.390 "data_offset": 0, 00:15:19.390 "data_size": 65536 00:15:19.390 }, 00:15:19.390 { 00:15:19.390 "name": "BaseBdev3", 00:15:19.390 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 
00:15:19.390 "is_configured": true, 00:15:19.390 "data_offset": 0, 00:15:19.390 "data_size": 65536 00:15:19.390 }, 00:15:19.390 { 00:15:19.390 "name": "BaseBdev4", 00:15:19.390 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:19.390 "is_configured": true, 00:15:19.390 "data_offset": 0, 00:15:19.390 "data_size": 65536 00:15:19.390 } 00:15:19.390 ] 00:15:19.390 }' 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.390 16:41:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.772 16:41:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.772 16:41:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.772 16:41:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.772 16:41:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.772 16:41:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.772 16:41:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.772 16:41:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.772 16:41:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.772 16:41:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.772 16:41:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.772 16:41:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.772 16:41:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.772 "name": "raid_bdev1", 00:15:20.772 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:20.772 "strip_size_kb": 64, 00:15:20.772 "state": "online", 00:15:20.772 "raid_level": "raid5f", 00:15:20.772 "superblock": false, 00:15:20.772 "num_base_bdevs": 4, 00:15:20.772 "num_base_bdevs_discovered": 4, 00:15:20.772 "num_base_bdevs_operational": 4, 00:15:20.772 "process": { 00:15:20.772 "type": "rebuild", 00:15:20.772 "target": "spare", 00:15:20.772 "progress": { 00:15:20.772 "blocks": 151680, 00:15:20.772 "percent": 77 00:15:20.772 } 00:15:20.772 }, 00:15:20.772 "base_bdevs_list": [ 00:15:20.772 { 00:15:20.772 "name": "spare", 00:15:20.772 "uuid": "ba3b1b5a-a6b4-5d7f-a000-d11a785be416", 00:15:20.772 "is_configured": true, 00:15:20.772 "data_offset": 0, 00:15:20.772 "data_size": 65536 00:15:20.772 }, 00:15:20.772 { 00:15:20.772 "name": "BaseBdev2", 00:15:20.772 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:20.772 "is_configured": true, 00:15:20.772 "data_offset": 0, 00:15:20.772 "data_size": 65536 00:15:20.772 }, 00:15:20.772 { 00:15:20.772 "name": "BaseBdev3", 00:15:20.772 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:20.772 "is_configured": true, 00:15:20.772 "data_offset": 0, 00:15:20.773 "data_size": 65536 00:15:20.773 }, 00:15:20.773 { 00:15:20.773 "name": "BaseBdev4", 00:15:20.773 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:20.773 "is_configured": true, 00:15:20.773 "data_offset": 0, 00:15:20.773 "data_size": 65536 00:15:20.773 } 00:15:20.773 ] 00:15:20.773 }' 00:15:20.773 16:41:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.773 16:41:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:20.773 16:41:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.773 16:41:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.773 16:41:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:21.712 16:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.712 16:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.712 16:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.712 16:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.712 16:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.712 16:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.712 16:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.712 16:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.712 16:41:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.712 16:41:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.712 16:41:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.712 16:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.712 "name": "raid_bdev1", 00:15:21.712 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:21.712 "strip_size_kb": 64, 00:15:21.712 "state": "online", 00:15:21.712 "raid_level": "raid5f", 00:15:21.712 "superblock": false, 00:15:21.712 "num_base_bdevs": 4, 00:15:21.712 "num_base_bdevs_discovered": 4, 00:15:21.712 "num_base_bdevs_operational": 4, 00:15:21.712 
"process": { 00:15:21.712 "type": "rebuild", 00:15:21.712 "target": "spare", 00:15:21.712 "progress": { 00:15:21.712 "blocks": 174720, 00:15:21.712 "percent": 88 00:15:21.712 } 00:15:21.712 }, 00:15:21.712 "base_bdevs_list": [ 00:15:21.712 { 00:15:21.712 "name": "spare", 00:15:21.712 "uuid": "ba3b1b5a-a6b4-5d7f-a000-d11a785be416", 00:15:21.712 "is_configured": true, 00:15:21.712 "data_offset": 0, 00:15:21.712 "data_size": 65536 00:15:21.712 }, 00:15:21.713 { 00:15:21.713 "name": "BaseBdev2", 00:15:21.713 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:21.713 "is_configured": true, 00:15:21.713 "data_offset": 0, 00:15:21.713 "data_size": 65536 00:15:21.713 }, 00:15:21.713 { 00:15:21.713 "name": "BaseBdev3", 00:15:21.713 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:21.713 "is_configured": true, 00:15:21.713 "data_offset": 0, 00:15:21.713 "data_size": 65536 00:15:21.713 }, 00:15:21.713 { 00:15:21.713 "name": "BaseBdev4", 00:15:21.713 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:21.713 "is_configured": true, 00:15:21.713 "data_offset": 0, 00:15:21.713 "data_size": 65536 00:15:21.713 } 00:15:21.713 ] 00:15:21.713 }' 00:15:21.713 16:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.713 16:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.713 16:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.713 16:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.713 16:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.095 [2024-12-07 16:41:21.621022] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:23.095 [2024-12-07 16:41:21.621140] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:23.095 [2024-12-07 16:41:21.621206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.095 "name": "raid_bdev1", 00:15:23.095 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:23.095 "strip_size_kb": 64, 00:15:23.095 "state": "online", 00:15:23.095 "raid_level": "raid5f", 00:15:23.095 "superblock": false, 00:15:23.095 "num_base_bdevs": 4, 00:15:23.095 "num_base_bdevs_discovered": 4, 00:15:23.095 "num_base_bdevs_operational": 4, 00:15:23.095 "process": { 00:15:23.095 "type": "rebuild", 00:15:23.095 "target": "spare", 00:15:23.095 "progress": { 00:15:23.095 "blocks": 195840, 00:15:23.095 "percent": 99 00:15:23.095 } 00:15:23.095 }, 00:15:23.095 "base_bdevs_list": [ 
00:15:23.095 { 00:15:23.095 "name": "spare", 00:15:23.095 "uuid": "ba3b1b5a-a6b4-5d7f-a000-d11a785be416", 00:15:23.095 "is_configured": true, 00:15:23.095 "data_offset": 0, 00:15:23.095 "data_size": 65536 00:15:23.095 }, 00:15:23.095 { 00:15:23.095 "name": "BaseBdev2", 00:15:23.095 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:23.095 "is_configured": true, 00:15:23.095 "data_offset": 0, 00:15:23.095 "data_size": 65536 00:15:23.095 }, 00:15:23.095 { 00:15:23.095 "name": "BaseBdev3", 00:15:23.095 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:23.095 "is_configured": true, 00:15:23.095 "data_offset": 0, 00:15:23.095 "data_size": 65536 00:15:23.095 }, 00:15:23.095 { 00:15:23.095 "name": "BaseBdev4", 00:15:23.095 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:23.095 "is_configured": true, 00:15:23.095 "data_offset": 0, 00:15:23.095 "data_size": 65536 00:15:23.095 } 00:15:23.095 ] 00:15:23.095 }' 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.095 16:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.035 "name": "raid_bdev1", 00:15:24.035 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:24.035 "strip_size_kb": 64, 00:15:24.035 "state": "online", 00:15:24.035 "raid_level": "raid5f", 00:15:24.035 "superblock": false, 00:15:24.035 "num_base_bdevs": 4, 00:15:24.035 "num_base_bdevs_discovered": 4, 00:15:24.035 "num_base_bdevs_operational": 4, 00:15:24.035 "base_bdevs_list": [ 00:15:24.035 { 00:15:24.035 "name": "spare", 00:15:24.035 "uuid": "ba3b1b5a-a6b4-5d7f-a000-d11a785be416", 00:15:24.035 "is_configured": true, 00:15:24.035 "data_offset": 0, 00:15:24.035 "data_size": 65536 00:15:24.035 }, 00:15:24.035 { 00:15:24.035 "name": "BaseBdev2", 00:15:24.035 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:24.035 "is_configured": true, 00:15:24.035 "data_offset": 0, 00:15:24.035 "data_size": 65536 00:15:24.035 }, 00:15:24.035 { 00:15:24.035 "name": "BaseBdev3", 00:15:24.035 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:24.035 "is_configured": true, 00:15:24.035 "data_offset": 0, 00:15:24.035 "data_size": 65536 00:15:24.035 }, 00:15:24.035 { 00:15:24.035 "name": "BaseBdev4", 00:15:24.035 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:24.035 "is_configured": 
true, 00:15:24.035 "data_offset": 0, 00:15:24.035 "data_size": 65536 00:15:24.035 } 00:15:24.035 ] 00:15:24.035 }' 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.035 16:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.295 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.295 "name": "raid_bdev1", 00:15:24.295 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:24.295 "strip_size_kb": 64, 00:15:24.295 "state": 
"online", 00:15:24.295 "raid_level": "raid5f", 00:15:24.295 "superblock": false, 00:15:24.295 "num_base_bdevs": 4, 00:15:24.295 "num_base_bdevs_discovered": 4, 00:15:24.295 "num_base_bdevs_operational": 4, 00:15:24.295 "base_bdevs_list": [ 00:15:24.295 { 00:15:24.295 "name": "spare", 00:15:24.295 "uuid": "ba3b1b5a-a6b4-5d7f-a000-d11a785be416", 00:15:24.295 "is_configured": true, 00:15:24.295 "data_offset": 0, 00:15:24.295 "data_size": 65536 00:15:24.295 }, 00:15:24.295 { 00:15:24.295 "name": "BaseBdev2", 00:15:24.295 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:24.295 "is_configured": true, 00:15:24.295 "data_offset": 0, 00:15:24.295 "data_size": 65536 00:15:24.295 }, 00:15:24.295 { 00:15:24.295 "name": "BaseBdev3", 00:15:24.295 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:24.295 "is_configured": true, 00:15:24.295 "data_offset": 0, 00:15:24.295 "data_size": 65536 00:15:24.295 }, 00:15:24.295 { 00:15:24.295 "name": "BaseBdev4", 00:15:24.295 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:24.295 "is_configured": true, 00:15:24.295 "data_offset": 0, 00:15:24.295 "data_size": 65536 00:15:24.295 } 00:15:24.295 ] 00:15:24.296 }' 00:15:24.296 16:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.296 16:41:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.296 "name": "raid_bdev1", 00:15:24.296 "uuid": "04067ba6-bd9e-4778-949e-12693072ac91", 00:15:24.296 "strip_size_kb": 64, 00:15:24.296 "state": "online", 00:15:24.296 "raid_level": "raid5f", 00:15:24.296 "superblock": false, 00:15:24.296 "num_base_bdevs": 4, 00:15:24.296 "num_base_bdevs_discovered": 4, 00:15:24.296 "num_base_bdevs_operational": 4, 00:15:24.296 "base_bdevs_list": [ 00:15:24.296 { 00:15:24.296 "name": "spare", 00:15:24.296 "uuid": "ba3b1b5a-a6b4-5d7f-a000-d11a785be416", 00:15:24.296 "is_configured": true, 00:15:24.296 "data_offset": 0, 00:15:24.296 "data_size": 65536 00:15:24.296 }, 00:15:24.296 { 00:15:24.296 
"name": "BaseBdev2", 00:15:24.296 "uuid": "3ac7f307-bcce-5e35-86cc-a964f895632b", 00:15:24.296 "is_configured": true, 00:15:24.296 "data_offset": 0, 00:15:24.296 "data_size": 65536 00:15:24.296 }, 00:15:24.296 { 00:15:24.296 "name": "BaseBdev3", 00:15:24.296 "uuid": "b0503b7f-df34-50d9-b78d-2592ba57bdf9", 00:15:24.296 "is_configured": true, 00:15:24.296 "data_offset": 0, 00:15:24.296 "data_size": 65536 00:15:24.296 }, 00:15:24.296 { 00:15:24.296 "name": "BaseBdev4", 00:15:24.296 "uuid": "e3293162-0cdb-57ae-864c-8e5c4eb33419", 00:15:24.296 "is_configured": true, 00:15:24.296 "data_offset": 0, 00:15:24.296 "data_size": 65536 00:15:24.296 } 00:15:24.296 ] 00:15:24.296 }' 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.296 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.865 [2024-12-07 16:41:23.527149] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:24.865 [2024-12-07 16:41:23.527236] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.865 [2024-12-07 16:41:23.527425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.865 [2024-12-07 16:41:23.527548] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.865 [2024-12-07 16:41:23.527585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.865 16:41:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:24.865 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:25.124 /dev/nbd0 00:15:25.124 16:41:23 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:25.124 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:25.124 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:25.124 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:25.124 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:25.125 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:25.125 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:25.125 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:25.125 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:25.125 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:25.125 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.125 1+0 records in 00:15:25.125 1+0 records out 00:15:25.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520759 s, 7.9 MB/s 00:15:25.125 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.125 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:25.125 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.125 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:25.125 16:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:25.125 16:41:23 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:25.125 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:25.125 16:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:25.384 /dev/nbd1 00:15:25.384 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:25.384 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:25.384 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:25.384 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:25.384 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:25.384 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:25.384 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.385 1+0 records in 00:15:25.385 1+0 records out 00:15:25.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429939 s, 9.5 MB/s 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.385 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:25.644 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:25.644 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:25.644 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:25.644 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.644 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.644 16:41:24 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:25.644 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:25.644 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.644 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.644 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95349 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 95349 ']' 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 95349 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95349 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.904 killing process with pid 95349 00:15:25.904 Received shutdown signal, test time was about 60.000000 seconds 00:15:25.904 00:15:25.904 Latency(us) 00:15:25.904 [2024-12-07T16:41:24.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.904 [2024-12-07T16:41:24.803Z] =================================================================================================================== 00:15:25.904 [2024-12-07T16:41:24.803Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95349' 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 95349 00:15:25.904 [2024-12-07 16:41:24.690706] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:25.904 16:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 95349 00:15:25.904 [2024-12-07 16:41:24.783187] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:26.473 00:15:26.473 real 0m18.736s 00:15:26.473 user 0m22.515s 00:15:26.473 sys 0m2.443s 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:26.473 ************************************ 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.473 END TEST raid5f_rebuild_test 00:15:26.473 ************************************ 00:15:26.473 16:41:25 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:15:26.473 16:41:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:26.473 16:41:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:26.473 16:41:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:26.473 ************************************ 00:15:26.473 START TEST raid5f_rebuild_test_sb 00:15:26.473 ************************************ 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:26.473 16:41:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95854 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95854 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95854 ']' 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.473 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.474 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.474 16:41:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.474 [2024-12-07 16:41:25.323587] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... [2024-12-07 16:41:25.323767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal
I/O size of 3145728 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
00:15:26.474 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95854 ] 00:15:26.732 [2024-12-07 16:41:25.487874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.732 [2024-12-07 16:41:25.561269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.991 [2024-12-07 16:41:25.637367] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.991 [2024-12-07 16:41:25.637479] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:27.250 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:27.250 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:27.251 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:27.251 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:27.251 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.251 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.510 BaseBdev1_malloc 00:15:27.510 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.510 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:27.510 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.510 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.510 [2024-12-07 16:41:26.171882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:27.510 [2024-12-07 16:41:26.171999] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:27.510 [2024-12-07 16:41:26.172033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:27.510 [2024-12-07 16:41:26.172051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.510 [2024-12-07 16:41:26.174409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.510 [2024-12-07 16:41:26.174445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:27.510 BaseBdev1 00:15:27.510 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.510 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:27.510 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:27.510 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.510 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.510 BaseBdev2_malloc 00:15:27.510 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.510 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:27.510 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.510 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.511 [2024-12-07 16:41:26.222808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:27.511 [2024-12-07 16:41:26.223000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.511 [2024-12-07 16:41:26.223058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:27.511 
[2024-12-07 16:41:26.223081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.511 [2024-12-07 16:41:26.228209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.511 [2024-12-07 16:41:26.228278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:27.511 BaseBdev2 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.511 BaseBdev3_malloc 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.511 [2024-12-07 16:41:26.260774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:27.511 [2024-12-07 16:41:26.260820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.511 [2024-12-07 16:41:26.260846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:27.511 [2024-12-07 16:41:26.260855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.511 [2024-12-07 16:41:26.263135] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.511 [2024-12-07 16:41:26.263167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:27.511 BaseBdev3 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.511 BaseBdev4_malloc 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.511 [2024-12-07 16:41:26.295043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:27.511 [2024-12-07 16:41:26.295092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.511 [2024-12-07 16:41:26.295118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:27.511 [2024-12-07 16:41:26.295126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.511 [2024-12-07 16:41:26.297396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.511 [2024-12-07 16:41:26.297425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:15:27.511 BaseBdev4 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.511 spare_malloc 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.511 spare_delay 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.511 [2024-12-07 16:41:26.341638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:27.511 [2024-12-07 16:41:26.341686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.511 [2024-12-07 16:41:26.341708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:27.511 [2024-12-07 16:41:26.341716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.511 [2024-12-07 16:41:26.344020] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.511 [2024-12-07 16:41:26.344099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:27.511 spare 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.511 [2024-12-07 16:41:26.353720] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.511 [2024-12-07 16:41:26.355766] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.511 [2024-12-07 16:41:26.355869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.511 [2024-12-07 16:41:26.355931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:27.511 [2024-12-07 16:41:26.356127] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:27.511 [2024-12-07 16:41:26.356176] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:27.511 [2024-12-07 16:41:26.356463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:27.511 [2024-12-07 16:41:26.356964] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:27.511 [2024-12-07 16:41:26.357015] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:27.511 [2024-12-07 16:41:26.357172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.511 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.771 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.771 "name": "raid_bdev1", 00:15:27.771 "uuid": 
"04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:27.771 "strip_size_kb": 64, 00:15:27.771 "state": "online", 00:15:27.771 "raid_level": "raid5f", 00:15:27.771 "superblock": true, 00:15:27.771 "num_base_bdevs": 4, 00:15:27.771 "num_base_bdevs_discovered": 4, 00:15:27.771 "num_base_bdevs_operational": 4, 00:15:27.771 "base_bdevs_list": [ 00:15:27.771 { 00:15:27.771 "name": "BaseBdev1", 00:15:27.771 "uuid": "0b2f9fc3-4f1d-5dbf-9fa5-2496e99a7760", 00:15:27.771 "is_configured": true, 00:15:27.771 "data_offset": 2048, 00:15:27.771 "data_size": 63488 00:15:27.771 }, 00:15:27.771 { 00:15:27.771 "name": "BaseBdev2", 00:15:27.771 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:27.771 "is_configured": true, 00:15:27.771 "data_offset": 2048, 00:15:27.771 "data_size": 63488 00:15:27.771 }, 00:15:27.771 { 00:15:27.771 "name": "BaseBdev3", 00:15:27.771 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:27.771 "is_configured": true, 00:15:27.771 "data_offset": 2048, 00:15:27.771 "data_size": 63488 00:15:27.771 }, 00:15:27.771 { 00:15:27.771 "name": "BaseBdev4", 00:15:27.771 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:27.771 "is_configured": true, 00:15:27.771 "data_offset": 2048, 00:15:27.771 "data_size": 63488 00:15:27.771 } 00:15:27.771 ] 00:15:27.771 }' 00:15:27.771 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.771 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.031 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:28.031 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.031 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.031 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:28.031 [2024-12-07 16:41:26.863581] bdev_raid.c:1129:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:15:28.031 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.031 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:15:28.031 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.031 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.031 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.031 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:28.031 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.291 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:28.291 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:28.291 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:28.291 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:28.291 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:28.291 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.291 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:28.291 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:28.291 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:28.291 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:28.291 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:15:28.291 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:28.291 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:28.291 16:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:28.291 [2024-12-07 16:41:27.154866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:28.291 /dev/nbd0 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.551 1+0 records in 00:15:28.551 1+0 records out 00:15:28.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467931 s, 8.8 MB/s 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:28.551 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:29.125 496+0 records in 00:15:29.125 496+0 records out 00:15:29.125 97517568 bytes (98 MB, 93 MiB) copied, 0.521102 s, 187 MB/s 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:29.125 [2024-12-07 16:41:27.971234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.125 [2024-12-07 16:41:27.991300] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.125 16:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.125 16:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.125 16:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.125 16:41:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.125 16:41:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.386 16:41:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.386 16:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.386 "name": "raid_bdev1", 00:15:29.386 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:29.386 "strip_size_kb": 64, 00:15:29.386 "state": "online", 00:15:29.386 "raid_level": "raid5f", 00:15:29.386 "superblock": true, 00:15:29.386 "num_base_bdevs": 4, 00:15:29.386 "num_base_bdevs_discovered": 3, 00:15:29.386 "num_base_bdevs_operational": 3, 00:15:29.386 "base_bdevs_list": [ 00:15:29.386 { 00:15:29.386 "name": null, 00:15:29.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.386 "is_configured": 
false, 00:15:29.386 "data_offset": 0, 00:15:29.386 "data_size": 63488 00:15:29.386 }, 00:15:29.386 { 00:15:29.386 "name": "BaseBdev2", 00:15:29.386 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:29.386 "is_configured": true, 00:15:29.386 "data_offset": 2048, 00:15:29.386 "data_size": 63488 00:15:29.386 }, 00:15:29.386 { 00:15:29.386 "name": "BaseBdev3", 00:15:29.386 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:29.386 "is_configured": true, 00:15:29.386 "data_offset": 2048, 00:15:29.386 "data_size": 63488 00:15:29.386 }, 00:15:29.386 { 00:15:29.386 "name": "BaseBdev4", 00:15:29.386 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:29.386 "is_configured": true, 00:15:29.386 "data_offset": 2048, 00:15:29.386 "data_size": 63488 00:15:29.386 } 00:15:29.386 ] 00:15:29.386 }' 00:15:29.386 16:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.386 16:41:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.646 16:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:29.646 16:41:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.646 16:41:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.646 [2024-12-07 16:41:28.474546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.646 [2024-12-07 16:41:28.480761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:15:29.646 16:41:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.646 16:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:29.646 [2024-12-07 16:41:28.483332] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.030 "name": "raid_bdev1", 00:15:31.030 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:31.030 "strip_size_kb": 64, 00:15:31.030 "state": "online", 00:15:31.030 "raid_level": "raid5f", 00:15:31.030 "superblock": true, 00:15:31.030 "num_base_bdevs": 4, 00:15:31.030 "num_base_bdevs_discovered": 4, 00:15:31.030 "num_base_bdevs_operational": 4, 00:15:31.030 "process": { 00:15:31.030 "type": "rebuild", 00:15:31.030 "target": "spare", 00:15:31.030 "progress": { 00:15:31.030 "blocks": 19200, 00:15:31.030 "percent": 10 00:15:31.030 } 00:15:31.030 }, 00:15:31.030 "base_bdevs_list": [ 00:15:31.030 { 00:15:31.030 "name": "spare", 00:15:31.030 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:31.030 "is_configured": true, 00:15:31.030 "data_offset": 2048, 00:15:31.030 "data_size": 63488 00:15:31.030 }, 
00:15:31.030 { 00:15:31.030 "name": "BaseBdev2", 00:15:31.030 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:31.030 "is_configured": true, 00:15:31.030 "data_offset": 2048, 00:15:31.030 "data_size": 63488 00:15:31.030 }, 00:15:31.030 { 00:15:31.030 "name": "BaseBdev3", 00:15:31.030 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:31.030 "is_configured": true, 00:15:31.030 "data_offset": 2048, 00:15:31.030 "data_size": 63488 00:15:31.030 }, 00:15:31.030 { 00:15:31.030 "name": "BaseBdev4", 00:15:31.030 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:31.030 "is_configured": true, 00:15:31.030 "data_offset": 2048, 00:15:31.030 "data_size": 63488 00:15:31.030 } 00:15:31.030 ] 00:15:31.030 }' 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.030 [2024-12-07 16:41:29.635589] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.030 [2024-12-07 16:41:29.694709] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:31.030 [2024-12-07 16:41:29.694833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.030 [2024-12-07 16:41:29.694859] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.030 
[2024-12-07 16:41:29.694871] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.030 "name": "raid_bdev1", 00:15:31.030 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:31.030 "strip_size_kb": 64, 00:15:31.030 "state": "online", 00:15:31.030 "raid_level": "raid5f", 00:15:31.030 "superblock": true, 00:15:31.030 "num_base_bdevs": 4, 00:15:31.030 "num_base_bdevs_discovered": 3, 00:15:31.030 "num_base_bdevs_operational": 3, 00:15:31.030 "base_bdevs_list": [ 00:15:31.030 { 00:15:31.030 "name": null, 00:15:31.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.030 "is_configured": false, 00:15:31.030 "data_offset": 0, 00:15:31.030 "data_size": 63488 00:15:31.030 }, 00:15:31.030 { 00:15:31.030 "name": "BaseBdev2", 00:15:31.030 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:31.030 "is_configured": true, 00:15:31.030 "data_offset": 2048, 00:15:31.030 "data_size": 63488 00:15:31.030 }, 00:15:31.030 { 00:15:31.030 "name": "BaseBdev3", 00:15:31.030 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:31.030 "is_configured": true, 00:15:31.030 "data_offset": 2048, 00:15:31.030 "data_size": 63488 00:15:31.030 }, 00:15:31.030 { 00:15:31.030 "name": "BaseBdev4", 00:15:31.030 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:31.030 "is_configured": true, 00:15:31.030 "data_offset": 2048, 00:15:31.030 "data_size": 63488 00:15:31.030 } 00:15:31.030 ] 00:15:31.030 }' 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.030 16:41:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.289 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:31.289 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.289 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:31.289 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:15:31.289 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.548 "name": "raid_bdev1", 00:15:31.548 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:31.548 "strip_size_kb": 64, 00:15:31.548 "state": "online", 00:15:31.548 "raid_level": "raid5f", 00:15:31.548 "superblock": true, 00:15:31.548 "num_base_bdevs": 4, 00:15:31.548 "num_base_bdevs_discovered": 3, 00:15:31.548 "num_base_bdevs_operational": 3, 00:15:31.548 "base_bdevs_list": [ 00:15:31.548 { 00:15:31.548 "name": null, 00:15:31.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.548 "is_configured": false, 00:15:31.548 "data_offset": 0, 00:15:31.548 "data_size": 63488 00:15:31.548 }, 00:15:31.548 { 00:15:31.548 "name": "BaseBdev2", 00:15:31.548 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:31.548 "is_configured": true, 00:15:31.548 "data_offset": 2048, 00:15:31.548 "data_size": 63488 00:15:31.548 }, 00:15:31.548 { 00:15:31.548 "name": "BaseBdev3", 00:15:31.548 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:31.548 "is_configured": true, 00:15:31.548 "data_offset": 2048, 00:15:31.548 "data_size": 63488 00:15:31.548 }, 00:15:31.548 { 00:15:31.548 "name": "BaseBdev4", 00:15:31.548 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 
00:15:31.548 "is_configured": true, 00:15:31.548 "data_offset": 2048, 00:15:31.548 "data_size": 63488 00:15:31.548 } 00:15:31.548 ] 00:15:31.548 }' 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.548 [2024-12-07 16:41:30.319195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.548 [2024-12-07 16:41:30.325157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.548 16:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:31.548 [2024-12-07 16:41:30.327751] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:32.485 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.486 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.486 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.486 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:32.486 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.486 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.486 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.486 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.486 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.486 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.746 "name": "raid_bdev1", 00:15:32.746 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:32.746 "strip_size_kb": 64, 00:15:32.746 "state": "online", 00:15:32.746 "raid_level": "raid5f", 00:15:32.746 "superblock": true, 00:15:32.746 "num_base_bdevs": 4, 00:15:32.746 "num_base_bdevs_discovered": 4, 00:15:32.746 "num_base_bdevs_operational": 4, 00:15:32.746 "process": { 00:15:32.746 "type": "rebuild", 00:15:32.746 "target": "spare", 00:15:32.746 "progress": { 00:15:32.746 "blocks": 19200, 00:15:32.746 "percent": 10 00:15:32.746 } 00:15:32.746 }, 00:15:32.746 "base_bdevs_list": [ 00:15:32.746 { 00:15:32.746 "name": "spare", 00:15:32.746 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:32.746 "is_configured": true, 00:15:32.746 "data_offset": 2048, 00:15:32.746 "data_size": 63488 00:15:32.746 }, 00:15:32.746 { 00:15:32.746 "name": "BaseBdev2", 00:15:32.746 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:32.746 "is_configured": true, 00:15:32.746 "data_offset": 2048, 00:15:32.746 "data_size": 63488 00:15:32.746 }, 00:15:32.746 { 00:15:32.746 "name": "BaseBdev3", 00:15:32.746 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:32.746 "is_configured": true, 00:15:32.746 "data_offset": 2048, 
00:15:32.746 "data_size": 63488 00:15:32.746 }, 00:15:32.746 { 00:15:32.746 "name": "BaseBdev4", 00:15:32.746 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:32.746 "is_configured": true, 00:15:32.746 "data_offset": 2048, 00:15:32.746 "data_size": 63488 00:15:32.746 } 00:15:32.746 ] 00:15:32.746 }' 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:32.746 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=543 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.746 "name": "raid_bdev1", 00:15:32.746 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:32.746 "strip_size_kb": 64, 00:15:32.746 "state": "online", 00:15:32.746 "raid_level": "raid5f", 00:15:32.746 "superblock": true, 00:15:32.746 "num_base_bdevs": 4, 00:15:32.746 "num_base_bdevs_discovered": 4, 00:15:32.746 "num_base_bdevs_operational": 4, 00:15:32.746 "process": { 00:15:32.746 "type": "rebuild", 00:15:32.746 "target": "spare", 00:15:32.746 "progress": { 00:15:32.746 "blocks": 21120, 00:15:32.746 "percent": 11 00:15:32.746 } 00:15:32.746 }, 00:15:32.746 "base_bdevs_list": [ 00:15:32.746 { 00:15:32.746 "name": "spare", 00:15:32.746 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:32.746 "is_configured": true, 00:15:32.746 "data_offset": 2048, 00:15:32.746 "data_size": 63488 00:15:32.746 }, 00:15:32.746 { 00:15:32.746 "name": "BaseBdev2", 00:15:32.746 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:32.746 "is_configured": true, 00:15:32.746 "data_offset": 2048, 00:15:32.746 "data_size": 63488 00:15:32.746 }, 00:15:32.746 { 00:15:32.746 "name": "BaseBdev3", 00:15:32.746 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:32.746 "is_configured": true, 00:15:32.746 "data_offset": 2048, 
00:15:32.746 "data_size": 63488 00:15:32.746 }, 00:15:32.746 { 00:15:32.746 "name": "BaseBdev4", 00:15:32.746 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:32.746 "is_configured": true, 00:15:32.746 "data_offset": 2048, 00:15:32.746 "data_size": 63488 00:15:32.746 } 00:15:32.746 ] 00:15:32.746 }' 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.746 16:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.129 "name": "raid_bdev1", 00:15:34.129 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:34.129 "strip_size_kb": 64, 00:15:34.129 "state": "online", 00:15:34.129 "raid_level": "raid5f", 00:15:34.129 "superblock": true, 00:15:34.129 "num_base_bdevs": 4, 00:15:34.129 "num_base_bdevs_discovered": 4, 00:15:34.129 "num_base_bdevs_operational": 4, 00:15:34.129 "process": { 00:15:34.129 "type": "rebuild", 00:15:34.129 "target": "spare", 00:15:34.129 "progress": { 00:15:34.129 "blocks": 44160, 00:15:34.129 "percent": 23 00:15:34.129 } 00:15:34.129 }, 00:15:34.129 "base_bdevs_list": [ 00:15:34.129 { 00:15:34.129 "name": "spare", 00:15:34.129 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:34.129 "is_configured": true, 00:15:34.129 "data_offset": 2048, 00:15:34.129 "data_size": 63488 00:15:34.129 }, 00:15:34.129 { 00:15:34.129 "name": "BaseBdev2", 00:15:34.129 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:34.129 "is_configured": true, 00:15:34.129 "data_offset": 2048, 00:15:34.129 "data_size": 63488 00:15:34.129 }, 00:15:34.129 { 00:15:34.129 "name": "BaseBdev3", 00:15:34.129 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:34.129 "is_configured": true, 00:15:34.129 "data_offset": 2048, 00:15:34.129 "data_size": 63488 00:15:34.129 }, 00:15:34.129 { 00:15:34.129 "name": "BaseBdev4", 00:15:34.129 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:34.129 "is_configured": true, 00:15:34.129 "data_offset": 2048, 00:15:34.129 "data_size": 63488 00:15:34.129 } 00:15:34.129 ] 00:15:34.129 }' 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.129 16:41:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.129 16:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.070 "name": "raid_bdev1", 00:15:35.070 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:35.070 "strip_size_kb": 64, 00:15:35.070 "state": "online", 00:15:35.070 "raid_level": "raid5f", 00:15:35.070 "superblock": true, 00:15:35.070 "num_base_bdevs": 4, 00:15:35.070 "num_base_bdevs_discovered": 4, 00:15:35.070 "num_base_bdevs_operational": 
4, 00:15:35.070 "process": { 00:15:35.070 "type": "rebuild", 00:15:35.070 "target": "spare", 00:15:35.070 "progress": { 00:15:35.070 "blocks": 65280, 00:15:35.070 "percent": 34 00:15:35.070 } 00:15:35.070 }, 00:15:35.070 "base_bdevs_list": [ 00:15:35.070 { 00:15:35.070 "name": "spare", 00:15:35.070 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:35.070 "is_configured": true, 00:15:35.070 "data_offset": 2048, 00:15:35.070 "data_size": 63488 00:15:35.070 }, 00:15:35.070 { 00:15:35.070 "name": "BaseBdev2", 00:15:35.070 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:35.070 "is_configured": true, 00:15:35.070 "data_offset": 2048, 00:15:35.070 "data_size": 63488 00:15:35.070 }, 00:15:35.070 { 00:15:35.070 "name": "BaseBdev3", 00:15:35.070 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:35.070 "is_configured": true, 00:15:35.070 "data_offset": 2048, 00:15:35.070 "data_size": 63488 00:15:35.070 }, 00:15:35.070 { 00:15:35.070 "name": "BaseBdev4", 00:15:35.070 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:35.070 "is_configured": true, 00:15:35.070 "data_offset": 2048, 00:15:35.070 "data_size": 63488 00:15:35.070 } 00:15:35.070 ] 00:15:35.070 }' 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.070 16:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:36.453 16:41:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.453 16:41:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.453 
16:41:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.453 16:41:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.453 16:41:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.453 16:41:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.453 16:41:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.453 16:41:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.453 16:41:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.453 16:41:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.453 16:41:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.453 16:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.453 "name": "raid_bdev1", 00:15:36.453 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:36.453 "strip_size_kb": 64, 00:15:36.453 "state": "online", 00:15:36.453 "raid_level": "raid5f", 00:15:36.453 "superblock": true, 00:15:36.453 "num_base_bdevs": 4, 00:15:36.453 "num_base_bdevs_discovered": 4, 00:15:36.453 "num_base_bdevs_operational": 4, 00:15:36.453 "process": { 00:15:36.453 "type": "rebuild", 00:15:36.453 "target": "spare", 00:15:36.453 "progress": { 00:15:36.453 "blocks": 88320, 00:15:36.453 "percent": 46 00:15:36.453 } 00:15:36.453 }, 00:15:36.453 "base_bdevs_list": [ 00:15:36.453 { 00:15:36.453 "name": "spare", 00:15:36.453 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:36.453 "is_configured": true, 00:15:36.453 "data_offset": 2048, 00:15:36.453 "data_size": 63488 00:15:36.453 }, 00:15:36.453 { 00:15:36.453 "name": "BaseBdev2", 00:15:36.453 "uuid": 
"e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:36.453 "is_configured": true, 00:15:36.453 "data_offset": 2048, 00:15:36.453 "data_size": 63488 00:15:36.453 }, 00:15:36.453 { 00:15:36.453 "name": "BaseBdev3", 00:15:36.453 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:36.453 "is_configured": true, 00:15:36.453 "data_offset": 2048, 00:15:36.453 "data_size": 63488 00:15:36.453 }, 00:15:36.453 { 00:15:36.453 "name": "BaseBdev4", 00:15:36.453 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:36.453 "is_configured": true, 00:15:36.453 "data_offset": 2048, 00:15:36.453 "data_size": 63488 00:15:36.453 } 00:15:36.453 ] 00:15:36.453 }' 00:15:36.453 16:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.453 16:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.453 16:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.453 16:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.453 16:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.389 "name": "raid_bdev1", 00:15:37.389 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:37.389 "strip_size_kb": 64, 00:15:37.389 "state": "online", 00:15:37.389 "raid_level": "raid5f", 00:15:37.389 "superblock": true, 00:15:37.389 "num_base_bdevs": 4, 00:15:37.389 "num_base_bdevs_discovered": 4, 00:15:37.389 "num_base_bdevs_operational": 4, 00:15:37.389 "process": { 00:15:37.389 "type": "rebuild", 00:15:37.389 "target": "spare", 00:15:37.389 "progress": { 00:15:37.389 "blocks": 109440, 00:15:37.389 "percent": 57 00:15:37.389 } 00:15:37.389 }, 00:15:37.389 "base_bdevs_list": [ 00:15:37.389 { 00:15:37.389 "name": "spare", 00:15:37.389 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:37.389 "is_configured": true, 00:15:37.389 "data_offset": 2048, 00:15:37.389 "data_size": 63488 00:15:37.389 }, 00:15:37.389 { 00:15:37.389 "name": "BaseBdev2", 00:15:37.389 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:37.389 "is_configured": true, 00:15:37.389 "data_offset": 2048, 00:15:37.389 "data_size": 63488 00:15:37.389 }, 00:15:37.389 { 00:15:37.389 "name": "BaseBdev3", 00:15:37.389 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:37.389 "is_configured": true, 00:15:37.389 "data_offset": 2048, 00:15:37.389 "data_size": 63488 00:15:37.389 }, 00:15:37.389 { 00:15:37.389 "name": "BaseBdev4", 00:15:37.389 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:37.389 "is_configured": true, 00:15:37.389 "data_offset": 
2048, 00:15:37.389 "data_size": 63488 00:15:37.389 } 00:15:37.389 ] 00:15:37.389 }' 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.389 16:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:38.767 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.767 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.767 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.767 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.767 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.767 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.768 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.768 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.768 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.768 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.768 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.768 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.768 
"name": "raid_bdev1", 00:15:38.768 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:38.768 "strip_size_kb": 64, 00:15:38.768 "state": "online", 00:15:38.768 "raid_level": "raid5f", 00:15:38.768 "superblock": true, 00:15:38.768 "num_base_bdevs": 4, 00:15:38.768 "num_base_bdevs_discovered": 4, 00:15:38.768 "num_base_bdevs_operational": 4, 00:15:38.768 "process": { 00:15:38.768 "type": "rebuild", 00:15:38.768 "target": "spare", 00:15:38.768 "progress": { 00:15:38.768 "blocks": 132480, 00:15:38.768 "percent": 69 00:15:38.768 } 00:15:38.768 }, 00:15:38.768 "base_bdevs_list": [ 00:15:38.768 { 00:15:38.768 "name": "spare", 00:15:38.768 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:38.768 "is_configured": true, 00:15:38.768 "data_offset": 2048, 00:15:38.768 "data_size": 63488 00:15:38.768 }, 00:15:38.768 { 00:15:38.768 "name": "BaseBdev2", 00:15:38.768 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:38.768 "is_configured": true, 00:15:38.768 "data_offset": 2048, 00:15:38.768 "data_size": 63488 00:15:38.768 }, 00:15:38.768 { 00:15:38.768 "name": "BaseBdev3", 00:15:38.768 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:38.768 "is_configured": true, 00:15:38.768 "data_offset": 2048, 00:15:38.768 "data_size": 63488 00:15:38.768 }, 00:15:38.768 { 00:15:38.768 "name": "BaseBdev4", 00:15:38.768 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:38.768 "is_configured": true, 00:15:38.768 "data_offset": 2048, 00:15:38.768 "data_size": 63488 00:15:38.768 } 00:15:38.768 ] 00:15:38.768 }' 00:15:38.768 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.768 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.768 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.768 16:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.768 
16:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.705 "name": "raid_bdev1", 00:15:39.705 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:39.705 "strip_size_kb": 64, 00:15:39.705 "state": "online", 00:15:39.705 "raid_level": "raid5f", 00:15:39.705 "superblock": true, 00:15:39.705 "num_base_bdevs": 4, 00:15:39.705 "num_base_bdevs_discovered": 4, 00:15:39.705 "num_base_bdevs_operational": 4, 00:15:39.705 "process": { 00:15:39.705 "type": "rebuild", 00:15:39.705 "target": "spare", 00:15:39.705 "progress": { 00:15:39.705 "blocks": 153600, 00:15:39.705 "percent": 80 00:15:39.705 } 00:15:39.705 }, 
00:15:39.705 "base_bdevs_list": [ 00:15:39.705 { 00:15:39.705 "name": "spare", 00:15:39.705 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:39.705 "is_configured": true, 00:15:39.705 "data_offset": 2048, 00:15:39.705 "data_size": 63488 00:15:39.705 }, 00:15:39.705 { 00:15:39.705 "name": "BaseBdev2", 00:15:39.705 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:39.705 "is_configured": true, 00:15:39.705 "data_offset": 2048, 00:15:39.705 "data_size": 63488 00:15:39.705 }, 00:15:39.705 { 00:15:39.705 "name": "BaseBdev3", 00:15:39.705 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:39.705 "is_configured": true, 00:15:39.705 "data_offset": 2048, 00:15:39.705 "data_size": 63488 00:15:39.705 }, 00:15:39.705 { 00:15:39.705 "name": "BaseBdev4", 00:15:39.705 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:39.705 "is_configured": true, 00:15:39.705 "data_offset": 2048, 00:15:39.705 "data_size": 63488 00:15:39.705 } 00:15:39.705 ] 00:15:39.705 }' 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.705 16:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.087 "name": "raid_bdev1", 00:15:41.087 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:41.087 "strip_size_kb": 64, 00:15:41.087 "state": "online", 00:15:41.087 "raid_level": "raid5f", 00:15:41.087 "superblock": true, 00:15:41.087 "num_base_bdevs": 4, 00:15:41.087 "num_base_bdevs_discovered": 4, 00:15:41.087 "num_base_bdevs_operational": 4, 00:15:41.087 "process": { 00:15:41.087 "type": "rebuild", 00:15:41.087 "target": "spare", 00:15:41.087 "progress": { 00:15:41.087 "blocks": 176640, 00:15:41.087 "percent": 92 00:15:41.087 } 00:15:41.087 }, 00:15:41.087 "base_bdevs_list": [ 00:15:41.087 { 00:15:41.087 "name": "spare", 00:15:41.087 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:41.087 "is_configured": true, 00:15:41.087 "data_offset": 2048, 00:15:41.087 "data_size": 63488 00:15:41.087 }, 00:15:41.087 { 00:15:41.087 "name": "BaseBdev2", 00:15:41.087 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:41.087 "is_configured": true, 00:15:41.087 "data_offset": 2048, 00:15:41.087 "data_size": 63488 00:15:41.087 }, 00:15:41.087 { 00:15:41.087 "name": "BaseBdev3", 
00:15:41.087 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:41.087 "is_configured": true, 00:15:41.087 "data_offset": 2048, 00:15:41.087 "data_size": 63488 00:15:41.087 }, 00:15:41.087 { 00:15:41.087 "name": "BaseBdev4", 00:15:41.087 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:41.087 "is_configured": true, 00:15:41.087 "data_offset": 2048, 00:15:41.087 "data_size": 63488 00:15:41.087 } 00:15:41.087 ] 00:15:41.087 }' 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.087 16:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:41.657 [2024-12-07 16:41:40.397958] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:41.657 [2024-12-07 16:41:40.398172] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:41.657 [2024-12-07 16:41:40.398364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.918 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.918 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.918 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.918 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.918 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.918 16:41:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.918 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.918 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.918 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.918 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.918 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.918 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.918 "name": "raid_bdev1", 00:15:41.918 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:41.918 "strip_size_kb": 64, 00:15:41.918 "state": "online", 00:15:41.918 "raid_level": "raid5f", 00:15:41.918 "superblock": true, 00:15:41.918 "num_base_bdevs": 4, 00:15:41.918 "num_base_bdevs_discovered": 4, 00:15:41.918 "num_base_bdevs_operational": 4, 00:15:41.918 "base_bdevs_list": [ 00:15:41.918 { 00:15:41.918 "name": "spare", 00:15:41.918 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:41.918 "is_configured": true, 00:15:41.918 "data_offset": 2048, 00:15:41.918 "data_size": 63488 00:15:41.918 }, 00:15:41.918 { 00:15:41.918 "name": "BaseBdev2", 00:15:41.918 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:41.918 "is_configured": true, 00:15:41.918 "data_offset": 2048, 00:15:41.918 "data_size": 63488 00:15:41.918 }, 00:15:41.918 { 00:15:41.918 "name": "BaseBdev3", 00:15:41.918 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:41.918 "is_configured": true, 00:15:41.918 "data_offset": 2048, 00:15:41.918 "data_size": 63488 00:15:41.918 }, 00:15:41.918 { 00:15:41.918 "name": "BaseBdev4", 00:15:41.918 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:41.918 "is_configured": true, 00:15:41.918 "data_offset": 2048, 
00:15:41.918 "data_size": 63488 00:15:41.918 } 00:15:41.918 ] 00:15:41.918 }' 00:15:41.918 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.179 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.179 "name": "raid_bdev1", 00:15:42.179 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:42.179 "strip_size_kb": 64, 00:15:42.179 
"state": "online", 00:15:42.179 "raid_level": "raid5f", 00:15:42.179 "superblock": true, 00:15:42.179 "num_base_bdevs": 4, 00:15:42.179 "num_base_bdevs_discovered": 4, 00:15:42.179 "num_base_bdevs_operational": 4, 00:15:42.179 "base_bdevs_list": [ 00:15:42.179 { 00:15:42.179 "name": "spare", 00:15:42.179 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:42.179 "is_configured": true, 00:15:42.179 "data_offset": 2048, 00:15:42.179 "data_size": 63488 00:15:42.179 }, 00:15:42.179 { 00:15:42.179 "name": "BaseBdev2", 00:15:42.179 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:42.179 "is_configured": true, 00:15:42.179 "data_offset": 2048, 00:15:42.179 "data_size": 63488 00:15:42.179 }, 00:15:42.179 { 00:15:42.179 "name": "BaseBdev3", 00:15:42.179 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:42.179 "is_configured": true, 00:15:42.179 "data_offset": 2048, 00:15:42.179 "data_size": 63488 00:15:42.179 }, 00:15:42.179 { 00:15:42.179 "name": "BaseBdev4", 00:15:42.180 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:42.180 "is_configured": true, 00:15:42.180 "data_offset": 2048, 00:15:42.180 "data_size": 63488 00:15:42.180 } 00:15:42.180 ] 00:15:42.180 }' 00:15:42.180 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.180 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.180 16:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.180 "name": "raid_bdev1", 00:15:42.180 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:42.180 "strip_size_kb": 64, 00:15:42.180 "state": "online", 00:15:42.180 "raid_level": "raid5f", 00:15:42.180 "superblock": true, 00:15:42.180 "num_base_bdevs": 4, 00:15:42.180 "num_base_bdevs_discovered": 4, 00:15:42.180 "num_base_bdevs_operational": 4, 00:15:42.180 "base_bdevs_list": [ 00:15:42.180 { 00:15:42.180 "name": "spare", 00:15:42.180 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:42.180 "is_configured": true, 00:15:42.180 
"data_offset": 2048, 00:15:42.180 "data_size": 63488 00:15:42.180 }, 00:15:42.180 { 00:15:42.180 "name": "BaseBdev2", 00:15:42.180 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:42.180 "is_configured": true, 00:15:42.180 "data_offset": 2048, 00:15:42.180 "data_size": 63488 00:15:42.180 }, 00:15:42.180 { 00:15:42.180 "name": "BaseBdev3", 00:15:42.180 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:42.180 "is_configured": true, 00:15:42.180 "data_offset": 2048, 00:15:42.180 "data_size": 63488 00:15:42.180 }, 00:15:42.180 { 00:15:42.180 "name": "BaseBdev4", 00:15:42.180 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:42.180 "is_configured": true, 00:15:42.180 "data_offset": 2048, 00:15:42.180 "data_size": 63488 00:15:42.180 } 00:15:42.180 ] 00:15:42.180 }' 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.180 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.750 [2024-12-07 16:41:41.414830] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:42.750 [2024-12-07 16:41:41.414869] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.750 [2024-12-07 16:41:41.414982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.750 [2024-12-07 16:41:41.415085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.750 [2024-12-07 16:41:41.415102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:42.750 
16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:42.750 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:43.010 /dev/nbd0 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.010 1+0 records in 00:15:43.010 1+0 records out 00:15:43.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330409 s, 12.4 MB/s 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:43.010 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:43.279 /dev/nbd1 00:15:43.279 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:43.279 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:43.279 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:43.279 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:43.279 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:43.279 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:43.279 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:43.279 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:43.279 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:43.279 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:43.279 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.279 1+0 records in 00:15:43.279 1+0 records out 00:15:43.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306454 s, 13.4 MB/s 00:15:43.279 16:41:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.279 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:43.279 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.279 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:43.279 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:43.279 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.279 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:43.279 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:43.279 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:43.279 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:43.279 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:43.279 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:43.279 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:43.279 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.279 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:43.538 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:43.538 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:43.538 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:43.538 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.538 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.538 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:43.538 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:43.538 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.538 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.538 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.797 
16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.797 [2024-12-07 16:41:42.587565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:43.797 [2024-12-07 16:41:42.587638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.797 [2024-12-07 16:41:42.587662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:43.797 [2024-12-07 16:41:42.587681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.797 [2024-12-07 16:41:42.590280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.797 [2024-12-07 16:41:42.590323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:43.797 [2024-12-07 16:41:42.590442] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:43.797 [2024-12-07 16:41:42.590494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.797 [2024-12-07 16:41:42.590648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.797 [2024-12-07 16:41:42.590751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:43.797 [2024-12-07 16:41:42.590810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:43.797 spare 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.797 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.797 [2024-12-07 16:41:42.690744] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:43.797 [2024-12-07 16:41:42.690817] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:43.797 [2024-12-07 16:41:42.691233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:15:43.797 [2024-12-07 16:41:42.691842] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:43.797 [2024-12-07 16:41:42.691883] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:43.797 [2024-12-07 16:41:42.692096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.057 "name": "raid_bdev1", 00:15:44.057 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:44.057 "strip_size_kb": 64, 00:15:44.057 "state": "online", 00:15:44.057 "raid_level": "raid5f", 00:15:44.057 "superblock": true, 00:15:44.057 "num_base_bdevs": 4, 00:15:44.057 "num_base_bdevs_discovered": 4, 00:15:44.057 "num_base_bdevs_operational": 4, 00:15:44.057 "base_bdevs_list": [ 00:15:44.057 { 00:15:44.057 "name": "spare", 00:15:44.057 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:44.057 "is_configured": true, 00:15:44.057 "data_offset": 2048, 00:15:44.057 "data_size": 63488 00:15:44.057 }, 00:15:44.057 { 00:15:44.057 "name": "BaseBdev2", 00:15:44.057 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:44.057 "is_configured": true, 00:15:44.057 "data_offset": 2048, 00:15:44.057 "data_size": 63488 00:15:44.057 }, 00:15:44.057 { 00:15:44.057 "name": "BaseBdev3", 00:15:44.057 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:44.057 
"is_configured": true, 00:15:44.057 "data_offset": 2048, 00:15:44.057 "data_size": 63488 00:15:44.057 }, 00:15:44.057 { 00:15:44.057 "name": "BaseBdev4", 00:15:44.057 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:44.057 "is_configured": true, 00:15:44.057 "data_offset": 2048, 00:15:44.057 "data_size": 63488 00:15:44.057 } 00:15:44.057 ] 00:15:44.057 }' 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.057 16:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.317 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.317 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.317 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.317 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.317 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.317 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.317 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.317 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.317 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.317 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.317 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.317 "name": "raid_bdev1", 00:15:44.317 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:44.317 "strip_size_kb": 64, 00:15:44.317 "state": "online", 00:15:44.317 "raid_level": "raid5f", 
00:15:44.317 "superblock": true, 00:15:44.317 "num_base_bdevs": 4, 00:15:44.317 "num_base_bdevs_discovered": 4, 00:15:44.317 "num_base_bdevs_operational": 4, 00:15:44.317 "base_bdevs_list": [ 00:15:44.317 { 00:15:44.317 "name": "spare", 00:15:44.317 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:44.317 "is_configured": true, 00:15:44.317 "data_offset": 2048, 00:15:44.317 "data_size": 63488 00:15:44.317 }, 00:15:44.317 { 00:15:44.317 "name": "BaseBdev2", 00:15:44.317 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:44.317 "is_configured": true, 00:15:44.317 "data_offset": 2048, 00:15:44.317 "data_size": 63488 00:15:44.317 }, 00:15:44.317 { 00:15:44.317 "name": "BaseBdev3", 00:15:44.317 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:44.317 "is_configured": true, 00:15:44.317 "data_offset": 2048, 00:15:44.317 "data_size": 63488 00:15:44.317 }, 00:15:44.317 { 00:15:44.317 "name": "BaseBdev4", 00:15:44.317 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:44.317 "is_configured": true, 00:15:44.317 "data_offset": 2048, 00:15:44.317 "data_size": 63488 00:15:44.317 } 00:15:44.317 ] 00:15:44.317 }' 00:15:44.317 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.577 [2024-12-07 16:41:43.363086] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.577 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.577 "name": "raid_bdev1", 00:15:44.577 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:44.577 "strip_size_kb": 64, 00:15:44.577 "state": "online", 00:15:44.577 "raid_level": "raid5f", 00:15:44.577 "superblock": true, 00:15:44.577 "num_base_bdevs": 4, 00:15:44.577 "num_base_bdevs_discovered": 3, 00:15:44.577 "num_base_bdevs_operational": 3, 00:15:44.577 "base_bdevs_list": [ 00:15:44.577 { 00:15:44.577 "name": null, 00:15:44.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.577 "is_configured": false, 00:15:44.577 "data_offset": 0, 00:15:44.577 "data_size": 63488 00:15:44.577 }, 00:15:44.577 { 00:15:44.577 "name": "BaseBdev2", 00:15:44.577 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:44.577 "is_configured": true, 00:15:44.577 "data_offset": 2048, 00:15:44.577 "data_size": 63488 00:15:44.577 }, 00:15:44.577 { 00:15:44.578 "name": "BaseBdev3", 00:15:44.578 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:44.578 "is_configured": true, 00:15:44.578 "data_offset": 2048, 00:15:44.578 "data_size": 63488 00:15:44.578 }, 00:15:44.578 { 00:15:44.578 "name": "BaseBdev4", 00:15:44.578 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:44.578 "is_configured": true, 00:15:44.578 "data_offset": 2048, 00:15:44.578 "data_size": 63488 00:15:44.578 } 00:15:44.578 ] 00:15:44.578 }' 00:15:44.578 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.578 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.148 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:45.148 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.148 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.148 [2024-12-07 16:41:43.814332] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:45.148 [2024-12-07 16:41:43.814698] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:45.148 [2024-12-07 16:41:43.814763] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:45.148 [2024-12-07 16:41:43.814845] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:45.148 [2024-12-07 16:41:43.820728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:15:45.148 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.148 16:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:45.148 [2024-12-07 16:41:43.823420] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:46.089 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.089 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.089 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.089 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.089 16:41:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.089 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.089 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.089 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.089 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.089 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.089 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.089 "name": "raid_bdev1", 00:15:46.089 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:46.089 "strip_size_kb": 64, 00:15:46.089 "state": "online", 00:15:46.089 "raid_level": "raid5f", 00:15:46.089 "superblock": true, 00:15:46.089 "num_base_bdevs": 4, 00:15:46.089 "num_base_bdevs_discovered": 4, 00:15:46.089 "num_base_bdevs_operational": 4, 00:15:46.089 "process": { 00:15:46.089 "type": "rebuild", 00:15:46.089 "target": "spare", 00:15:46.089 "progress": { 00:15:46.089 "blocks": 19200, 00:15:46.089 "percent": 10 00:15:46.089 } 00:15:46.089 }, 00:15:46.089 "base_bdevs_list": [ 00:15:46.089 { 00:15:46.089 "name": "spare", 00:15:46.089 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:46.089 "is_configured": true, 00:15:46.089 "data_offset": 2048, 00:15:46.089 "data_size": 63488 00:15:46.089 }, 00:15:46.089 { 00:15:46.089 "name": "BaseBdev2", 00:15:46.089 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:46.089 "is_configured": true, 00:15:46.089 "data_offset": 2048, 00:15:46.089 "data_size": 63488 00:15:46.089 }, 00:15:46.089 { 00:15:46.089 "name": "BaseBdev3", 00:15:46.089 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:46.089 "is_configured": true, 00:15:46.089 "data_offset": 2048, 00:15:46.089 "data_size": 
63488 00:15:46.089 }, 00:15:46.089 { 00:15:46.089 "name": "BaseBdev4", 00:15:46.089 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:46.089 "is_configured": true, 00:15:46.089 "data_offset": 2048, 00:15:46.089 "data_size": 63488 00:15:46.089 } 00:15:46.089 ] 00:15:46.089 }' 00:15:46.089 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.089 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.089 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.350 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.350 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:46.350 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.350 16:41:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.351 [2024-12-07 16:41:44.991742] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.351 [2024-12-07 16:41:45.034311] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:46.351 [2024-12-07 16:41:45.034425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.351 [2024-12-07 16:41:45.034448] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.351 [2024-12-07 16:41:45.034456] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.351 "name": "raid_bdev1", 00:15:46.351 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:46.351 "strip_size_kb": 64, 00:15:46.351 "state": "online", 00:15:46.351 "raid_level": "raid5f", 00:15:46.351 "superblock": true, 00:15:46.351 "num_base_bdevs": 4, 00:15:46.351 "num_base_bdevs_discovered": 3, 00:15:46.351 "num_base_bdevs_operational": 3, 00:15:46.351 "base_bdevs_list": [ 00:15:46.351 
{ 00:15:46.351 "name": null, 00:15:46.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.351 "is_configured": false, 00:15:46.351 "data_offset": 0, 00:15:46.351 "data_size": 63488 00:15:46.351 }, 00:15:46.351 { 00:15:46.351 "name": "BaseBdev2", 00:15:46.351 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:46.351 "is_configured": true, 00:15:46.351 "data_offset": 2048, 00:15:46.351 "data_size": 63488 00:15:46.351 }, 00:15:46.351 { 00:15:46.351 "name": "BaseBdev3", 00:15:46.351 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:46.351 "is_configured": true, 00:15:46.351 "data_offset": 2048, 00:15:46.351 "data_size": 63488 00:15:46.351 }, 00:15:46.351 { 00:15:46.351 "name": "BaseBdev4", 00:15:46.351 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:46.351 "is_configured": true, 00:15:46.351 "data_offset": 2048, 00:15:46.351 "data_size": 63488 00:15:46.351 } 00:15:46.351 ] 00:15:46.351 }' 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.351 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.610 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:46.610 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.610 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.610 [2024-12-07 16:41:45.466681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:46.610 [2024-12-07 16:41:45.466841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.610 [2024-12-07 16:41:45.466905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:46.610 [2024-12-07 16:41:45.466938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.610 [2024-12-07 16:41:45.467546] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.610 [2024-12-07 16:41:45.467613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:46.610 [2024-12-07 16:41:45.467753] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:46.610 [2024-12-07 16:41:45.467794] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:46.610 [2024-12-07 16:41:45.467845] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:46.610 [2024-12-07 16:41:45.467930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.610 [2024-12-07 16:41:45.473711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:15:46.610 spare 00:15:46.610 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.610 16:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:46.610 [2024-12-07 16:41:45.476331] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.011 "name": "raid_bdev1", 00:15:48.011 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:48.011 "strip_size_kb": 64, 00:15:48.011 "state": "online", 00:15:48.011 "raid_level": "raid5f", 00:15:48.011 "superblock": true, 00:15:48.011 "num_base_bdevs": 4, 00:15:48.011 "num_base_bdevs_discovered": 4, 00:15:48.011 "num_base_bdevs_operational": 4, 00:15:48.011 "process": { 00:15:48.011 "type": "rebuild", 00:15:48.011 "target": "spare", 00:15:48.011 "progress": { 00:15:48.011 "blocks": 19200, 00:15:48.011 "percent": 10 00:15:48.011 } 00:15:48.011 }, 00:15:48.011 "base_bdevs_list": [ 00:15:48.011 { 00:15:48.011 "name": "spare", 00:15:48.011 "uuid": "c03295cf-54d8-58bc-a967-2b99b7dc00b2", 00:15:48.011 "is_configured": true, 00:15:48.011 "data_offset": 2048, 00:15:48.011 "data_size": 63488 00:15:48.011 }, 00:15:48.011 { 00:15:48.011 "name": "BaseBdev2", 00:15:48.011 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:48.011 "is_configured": true, 00:15:48.011 "data_offset": 2048, 00:15:48.011 "data_size": 63488 00:15:48.011 }, 00:15:48.011 { 00:15:48.011 "name": "BaseBdev3", 00:15:48.011 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:48.011 "is_configured": true, 00:15:48.011 "data_offset": 2048, 00:15:48.011 "data_size": 63488 00:15:48.011 }, 00:15:48.011 { 00:15:48.011 "name": "BaseBdev4", 00:15:48.011 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:48.011 "is_configured": true, 00:15:48.011 "data_offset": 2048, 00:15:48.011 "data_size": 63488 00:15:48.011 } 
00:15:48.011 ] 00:15:48.011 }' 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.011 [2024-12-07 16:41:46.620600] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.011 [2024-12-07 16:41:46.687845] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:48.011 [2024-12-07 16:41:46.688071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.011 [2024-12-07 16:41:46.688118] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.011 [2024-12-07 16:41:46.688144] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.011 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.011 "name": "raid_bdev1", 00:15:48.011 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:48.012 "strip_size_kb": 64, 00:15:48.012 "state": "online", 00:15:48.012 "raid_level": "raid5f", 00:15:48.012 "superblock": true, 00:15:48.012 "num_base_bdevs": 4, 00:15:48.012 "num_base_bdevs_discovered": 3, 00:15:48.012 "num_base_bdevs_operational": 3, 00:15:48.012 "base_bdevs_list": [ 00:15:48.012 { 00:15:48.012 "name": null, 00:15:48.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.012 "is_configured": false, 00:15:48.012 "data_offset": 0, 00:15:48.012 "data_size": 63488 00:15:48.012 }, 00:15:48.012 { 00:15:48.012 
"name": "BaseBdev2", 00:15:48.012 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:48.012 "is_configured": true, 00:15:48.012 "data_offset": 2048, 00:15:48.012 "data_size": 63488 00:15:48.012 }, 00:15:48.012 { 00:15:48.012 "name": "BaseBdev3", 00:15:48.012 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:48.012 "is_configured": true, 00:15:48.012 "data_offset": 2048, 00:15:48.012 "data_size": 63488 00:15:48.012 }, 00:15:48.012 { 00:15:48.012 "name": "BaseBdev4", 00:15:48.012 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:48.012 "is_configured": true, 00:15:48.012 "data_offset": 2048, 00:15:48.012 "data_size": 63488 00:15:48.012 } 00:15:48.012 ] 00:15:48.012 }' 00:15:48.012 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.012 16:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.581 "name": "raid_bdev1", 00:15:48.581 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:48.581 "strip_size_kb": 64, 00:15:48.581 "state": "online", 00:15:48.581 "raid_level": "raid5f", 00:15:48.581 "superblock": true, 00:15:48.581 "num_base_bdevs": 4, 00:15:48.581 "num_base_bdevs_discovered": 3, 00:15:48.581 "num_base_bdevs_operational": 3, 00:15:48.581 "base_bdevs_list": [ 00:15:48.581 { 00:15:48.581 "name": null, 00:15:48.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.581 "is_configured": false, 00:15:48.581 "data_offset": 0, 00:15:48.581 "data_size": 63488 00:15:48.581 }, 00:15:48.581 { 00:15:48.581 "name": "BaseBdev2", 00:15:48.581 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:48.581 "is_configured": true, 00:15:48.581 "data_offset": 2048, 00:15:48.581 "data_size": 63488 00:15:48.581 }, 00:15:48.581 { 00:15:48.581 "name": "BaseBdev3", 00:15:48.581 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:48.581 "is_configured": true, 00:15:48.581 "data_offset": 2048, 00:15:48.581 "data_size": 63488 00:15:48.581 }, 00:15:48.581 { 00:15:48.581 "name": "BaseBdev4", 00:15:48.581 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:48.581 "is_configured": true, 00:15:48.581 "data_offset": 2048, 00:15:48.581 "data_size": 63488 00:15:48.581 } 00:15:48.581 ] 00:15:48.581 }' 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.581 [2024-12-07 16:41:47.332218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:48.581 [2024-12-07 16:41:47.332309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.581 [2024-12-07 16:41:47.332335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:48.581 [2024-12-07 16:41:47.332363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.581 [2024-12-07 16:41:47.332906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.581 [2024-12-07 16:41:47.332936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:48.581 [2024-12-07 16:41:47.333028] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:48.581 [2024-12-07 16:41:47.333054] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:48.581 [2024-12-07 16:41:47.333063] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:48.581 [2024-12-07 16:41:47.333080] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:15:48.581 BaseBdev1 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.581 16:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:49.518 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:49.518 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.518 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.518 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.518 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.518 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.518 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.518 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.518 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.518 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.518 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.518 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.519 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.519 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.519 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.519 16:41:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.519 "name": "raid_bdev1", 00:15:49.519 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:49.519 "strip_size_kb": 64, 00:15:49.519 "state": "online", 00:15:49.519 "raid_level": "raid5f", 00:15:49.519 "superblock": true, 00:15:49.519 "num_base_bdevs": 4, 00:15:49.519 "num_base_bdevs_discovered": 3, 00:15:49.519 "num_base_bdevs_operational": 3, 00:15:49.519 "base_bdevs_list": [ 00:15:49.519 { 00:15:49.519 "name": null, 00:15:49.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.519 "is_configured": false, 00:15:49.519 "data_offset": 0, 00:15:49.519 "data_size": 63488 00:15:49.519 }, 00:15:49.519 { 00:15:49.519 "name": "BaseBdev2", 00:15:49.519 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:49.519 "is_configured": true, 00:15:49.519 "data_offset": 2048, 00:15:49.519 "data_size": 63488 00:15:49.519 }, 00:15:49.519 { 00:15:49.519 "name": "BaseBdev3", 00:15:49.519 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:49.519 "is_configured": true, 00:15:49.519 "data_offset": 2048, 00:15:49.519 "data_size": 63488 00:15:49.519 }, 00:15:49.519 { 00:15:49.519 "name": "BaseBdev4", 00:15:49.519 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:49.519 "is_configured": true, 00:15:49.519 "data_offset": 2048, 00:15:49.519 "data_size": 63488 00:15:49.519 } 00:15:49.519 ] 00:15:49.519 }' 00:15:49.519 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.519 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.087 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.087 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.087 16:41:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.087 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.087 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.087 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.087 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.087 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.087 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.087 "name": "raid_bdev1", 00:15:50.087 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:50.087 "strip_size_kb": 64, 00:15:50.087 "state": "online", 00:15:50.087 "raid_level": "raid5f", 00:15:50.087 "superblock": true, 00:15:50.087 "num_base_bdevs": 4, 00:15:50.087 "num_base_bdevs_discovered": 3, 00:15:50.087 "num_base_bdevs_operational": 3, 00:15:50.087 "base_bdevs_list": [ 00:15:50.087 { 00:15:50.087 "name": null, 00:15:50.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.087 "is_configured": false, 00:15:50.087 "data_offset": 0, 00:15:50.087 "data_size": 63488 00:15:50.087 }, 00:15:50.087 { 00:15:50.087 "name": "BaseBdev2", 00:15:50.087 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:50.087 "is_configured": true, 00:15:50.087 "data_offset": 2048, 00:15:50.087 "data_size": 63488 00:15:50.087 }, 00:15:50.087 { 00:15:50.087 "name": "BaseBdev3", 00:15:50.087 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:50.087 "is_configured": true, 00:15:50.087 "data_offset": 2048, 00:15:50.087 "data_size": 63488 00:15:50.087 }, 00:15:50.087 { 00:15:50.087 "name": "BaseBdev4", 00:15:50.087 "uuid": 
"4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:50.087 "is_configured": true, 00:15:50.087 "data_offset": 2048, 00:15:50.087 "data_size": 63488 00:15:50.087 } 00:15:50.087 ] 00:15:50.087 }' 00:15:50.087 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.087 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.087 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.087 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.088 [2024-12-07 16:41:48.897606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.088 
[2024-12-07 16:41:48.897808] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:50.088 [2024-12-07 16:41:48.897826] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:50.088 request: 00:15:50.088 { 00:15:50.088 "base_bdev": "BaseBdev1", 00:15:50.088 "raid_bdev": "raid_bdev1", 00:15:50.088 "method": "bdev_raid_add_base_bdev", 00:15:50.088 "req_id": 1 00:15:50.088 } 00:15:50.088 Got JSON-RPC error response 00:15:50.088 response: 00:15:50.088 { 00:15:50.088 "code": -22, 00:15:50.088 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:50.088 } 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:50.088 16:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:51.026 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:51.026 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.026 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.026 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.026 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.026 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:15:51.027 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.027 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.027 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.027 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.027 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.027 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.027 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.027 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.287 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.287 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.287 "name": "raid_bdev1", 00:15:51.287 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:51.287 "strip_size_kb": 64, 00:15:51.287 "state": "online", 00:15:51.287 "raid_level": "raid5f", 00:15:51.287 "superblock": true, 00:15:51.287 "num_base_bdevs": 4, 00:15:51.287 "num_base_bdevs_discovered": 3, 00:15:51.287 "num_base_bdevs_operational": 3, 00:15:51.287 "base_bdevs_list": [ 00:15:51.287 { 00:15:51.287 "name": null, 00:15:51.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.287 "is_configured": false, 00:15:51.287 "data_offset": 0, 00:15:51.287 "data_size": 63488 00:15:51.287 }, 00:15:51.287 { 00:15:51.287 "name": "BaseBdev2", 00:15:51.287 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:51.287 "is_configured": true, 00:15:51.287 "data_offset": 2048, 00:15:51.287 "data_size": 63488 00:15:51.287 }, 00:15:51.287 { 00:15:51.287 "name": 
"BaseBdev3", 00:15:51.287 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:51.287 "is_configured": true, 00:15:51.287 "data_offset": 2048, 00:15:51.287 "data_size": 63488 00:15:51.287 }, 00:15:51.287 { 00:15:51.287 "name": "BaseBdev4", 00:15:51.287 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:51.287 "is_configured": true, 00:15:51.287 "data_offset": 2048, 00:15:51.287 "data_size": 63488 00:15:51.287 } 00:15:51.287 ] 00:15:51.287 }' 00:15:51.287 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.287 16:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.547 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.547 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.547 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.547 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.547 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.547 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.547 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.547 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.547 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.547 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.547 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.547 "name": "raid_bdev1", 00:15:51.547 "uuid": "04e14922-50a5-4717-8ae0-a59f54aee54c", 00:15:51.547 
"strip_size_kb": 64, 00:15:51.547 "state": "online", 00:15:51.547 "raid_level": "raid5f", 00:15:51.547 "superblock": true, 00:15:51.547 "num_base_bdevs": 4, 00:15:51.547 "num_base_bdevs_discovered": 3, 00:15:51.547 "num_base_bdevs_operational": 3, 00:15:51.547 "base_bdevs_list": [ 00:15:51.547 { 00:15:51.547 "name": null, 00:15:51.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.547 "is_configured": false, 00:15:51.547 "data_offset": 0, 00:15:51.547 "data_size": 63488 00:15:51.547 }, 00:15:51.547 { 00:15:51.547 "name": "BaseBdev2", 00:15:51.547 "uuid": "e34ca006-4a0c-5aaa-a38b-af8d01e5253a", 00:15:51.547 "is_configured": true, 00:15:51.547 "data_offset": 2048, 00:15:51.547 "data_size": 63488 00:15:51.547 }, 00:15:51.547 { 00:15:51.547 "name": "BaseBdev3", 00:15:51.547 "uuid": "6c7817ac-fec9-5089-9bb0-aea45d9b71c7", 00:15:51.548 "is_configured": true, 00:15:51.548 "data_offset": 2048, 00:15:51.548 "data_size": 63488 00:15:51.548 }, 00:15:51.548 { 00:15:51.548 "name": "BaseBdev4", 00:15:51.548 "uuid": "4ffead88-3db1-53fa-8539-d11b23ce3bc8", 00:15:51.548 "is_configured": true, 00:15:51.548 "data_offset": 2048, 00:15:51.548 "data_size": 63488 00:15:51.548 } 00:15:51.548 ] 00:15:51.548 }' 00:15:51.548 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.807 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.807 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.807 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.807 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95854 00:15:51.807 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95854 ']' 00:15:51.807 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95854 00:15:51.807 
16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:51.807 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:51.807 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95854 00:15:51.807 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:51.808 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:51.808 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95854' 00:15:51.808 killing process with pid 95854 00:15:51.808 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95854 00:15:51.808 Received shutdown signal, test time was about 60.000000 seconds 00:15:51.808 00:15:51.808 Latency(us) 00:15:51.808 [2024-12-07T16:41:50.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.808 [2024-12-07T16:41:50.707Z] =================================================================================================================== 00:15:51.808 [2024-12-07T16:41:50.707Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:51.808 [2024-12-07 16:41:50.535765] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:51.808 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95854 00:15:51.808 [2024-12-07 16:41:50.535916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.808 [2024-12-07 16:41:50.536006] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.808 [2024-12-07 16:41:50.536017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:51.808 [2024-12-07 16:41:50.632102] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.378 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:52.378 00:15:52.378 real 0m25.770s 00:15:52.378 user 0m32.543s 00:15:52.378 sys 0m3.344s 00:15:52.378 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:52.378 16:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.378 ************************************ 00:15:52.378 END TEST raid5f_rebuild_test_sb 00:15:52.378 ************************************ 00:15:52.378 16:41:51 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:52.378 16:41:51 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:52.378 16:41:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:52.378 16:41:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:52.378 16:41:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:52.378 ************************************ 00:15:52.378 START TEST raid_state_function_test_sb_4k 00:15:52.378 ************************************ 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:52.378 Process raid pid: 96658 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@229 -- # raid_pid=96658 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96658' 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96658 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96658 ']' 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:52.378 16:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.378 [2024-12-07 16:41:51.159095] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:52.378 [2024-12-07 16:41:51.159357] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.638 [2024-12-07 16:41:51.323005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.638 [2024-12-07 16:41:51.402378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.638 [2024-12-07 16:41:51.480119] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.638 [2024-12-07 16:41:51.480261] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.208 [2024-12-07 16:41:52.016606] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:53.208 [2024-12-07 16:41:52.016742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:53.208 [2024-12-07 16:41:52.016760] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:53.208 [2024-12-07 16:41:52.016781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.208 "name": "Existed_Raid", 00:15:53.208 "uuid": 
"52638861-28f8-4ceb-9cd5-f90fa2c2a80c", 00:15:53.208 "strip_size_kb": 0, 00:15:53.208 "state": "configuring", 00:15:53.208 "raid_level": "raid1", 00:15:53.208 "superblock": true, 00:15:53.208 "num_base_bdevs": 2, 00:15:53.208 "num_base_bdevs_discovered": 0, 00:15:53.208 "num_base_bdevs_operational": 2, 00:15:53.208 "base_bdevs_list": [ 00:15:53.208 { 00:15:53.208 "name": "BaseBdev1", 00:15:53.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.208 "is_configured": false, 00:15:53.208 "data_offset": 0, 00:15:53.208 "data_size": 0 00:15:53.208 }, 00:15:53.208 { 00:15:53.208 "name": "BaseBdev2", 00:15:53.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.208 "is_configured": false, 00:15:53.208 "data_offset": 0, 00:15:53.208 "data_size": 0 00:15:53.208 } 00:15:53.208 ] 00:15:53.208 }' 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.208 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.778 [2024-12-07 16:41:52.507658] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:53.778 [2024-12-07 16:41:52.507789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:53.778 16:41:52 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.778 [2024-12-07 16:41:52.519683] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:53.778 [2024-12-07 16:41:52.519782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:53.778 [2024-12-07 16:41:52.519813] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:53.778 [2024-12-07 16:41:52.519838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.778 [2024-12-07 16:41:52.547175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:53.778 BaseBdev1 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.778 [ 00:15:53.778 { 00:15:53.778 "name": "BaseBdev1", 00:15:53.778 "aliases": [ 00:15:53.778 "3676d104-c3d7-4664-ab3f-c5186a04e828" 00:15:53.778 ], 00:15:53.778 "product_name": "Malloc disk", 00:15:53.778 "block_size": 4096, 00:15:53.778 "num_blocks": 8192, 00:15:53.778 "uuid": "3676d104-c3d7-4664-ab3f-c5186a04e828", 00:15:53.778 "assigned_rate_limits": { 00:15:53.778 "rw_ios_per_sec": 0, 00:15:53.778 "rw_mbytes_per_sec": 0, 00:15:53.778 "r_mbytes_per_sec": 0, 00:15:53.778 "w_mbytes_per_sec": 0 00:15:53.778 }, 00:15:53.778 "claimed": true, 00:15:53.778 "claim_type": "exclusive_write", 00:15:53.778 "zoned": false, 00:15:53.778 "supported_io_types": { 00:15:53.778 "read": true, 00:15:53.778 "write": true, 00:15:53.778 "unmap": true, 00:15:53.778 "flush": true, 00:15:53.778 "reset": true, 00:15:53.778 "nvme_admin": false, 00:15:53.778 "nvme_io": false, 00:15:53.778 "nvme_io_md": false, 00:15:53.778 "write_zeroes": true, 00:15:53.778 "zcopy": true, 00:15:53.778 
"get_zone_info": false, 00:15:53.778 "zone_management": false, 00:15:53.778 "zone_append": false, 00:15:53.778 "compare": false, 00:15:53.778 "compare_and_write": false, 00:15:53.778 "abort": true, 00:15:53.778 "seek_hole": false, 00:15:53.778 "seek_data": false, 00:15:53.778 "copy": true, 00:15:53.778 "nvme_iov_md": false 00:15:53.778 }, 00:15:53.778 "memory_domains": [ 00:15:53.778 { 00:15:53.778 "dma_device_id": "system", 00:15:53.778 "dma_device_type": 1 00:15:53.778 }, 00:15:53.778 { 00:15:53.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.778 "dma_device_type": 2 00:15:53.778 } 00:15:53.778 ], 00:15:53.778 "driver_specific": {} 00:15:53.778 } 00:15:53.778 ] 00:15:53.778 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.779 "name": "Existed_Raid", 00:15:53.779 "uuid": "2ac62744-005a-4b2e-b7da-84ed6939cd68", 00:15:53.779 "strip_size_kb": 0, 00:15:53.779 "state": "configuring", 00:15:53.779 "raid_level": "raid1", 00:15:53.779 "superblock": true, 00:15:53.779 "num_base_bdevs": 2, 00:15:53.779 "num_base_bdevs_discovered": 1, 00:15:53.779 "num_base_bdevs_operational": 2, 00:15:53.779 "base_bdevs_list": [ 00:15:53.779 { 00:15:53.779 "name": "BaseBdev1", 00:15:53.779 "uuid": "3676d104-c3d7-4664-ab3f-c5186a04e828", 00:15:53.779 "is_configured": true, 00:15:53.779 "data_offset": 256, 00:15:53.779 "data_size": 7936 00:15:53.779 }, 00:15:53.779 { 00:15:53.779 "name": "BaseBdev2", 00:15:53.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.779 "is_configured": false, 00:15:53.779 "data_offset": 0, 00:15:53.779 "data_size": 0 00:15:53.779 } 00:15:53.779 ] 00:15:53.779 }' 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.779 16:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.348 [2024-12-07 16:41:53.042462] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.348 [2024-12-07 16:41:53.042544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.348 [2024-12-07 16:41:53.054428] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.348 [2024-12-07 16:41:53.056739] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.348 [2024-12-07 16:41:53.056836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:54.348 16:41:53 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.348 "name": "Existed_Raid", 00:15:54.348 "uuid": "749aa188-5887-40fb-8eca-a9bdc18b7bd5", 00:15:54.348 "strip_size_kb": 0, 00:15:54.348 "state": "configuring", 00:15:54.348 "raid_level": "raid1", 00:15:54.348 "superblock": true, 
00:15:54.348 "num_base_bdevs": 2, 00:15:54.348 "num_base_bdevs_discovered": 1, 00:15:54.348 "num_base_bdevs_operational": 2, 00:15:54.348 "base_bdevs_list": [ 00:15:54.348 { 00:15:54.348 "name": "BaseBdev1", 00:15:54.348 "uuid": "3676d104-c3d7-4664-ab3f-c5186a04e828", 00:15:54.348 "is_configured": true, 00:15:54.348 "data_offset": 256, 00:15:54.348 "data_size": 7936 00:15:54.348 }, 00:15:54.348 { 00:15:54.348 "name": "BaseBdev2", 00:15:54.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.348 "is_configured": false, 00:15:54.348 "data_offset": 0, 00:15:54.348 "data_size": 0 00:15:54.348 } 00:15:54.348 ] 00:15:54.348 }' 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.348 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.917 BaseBdev2 00:15:54.917 [2024-12-07 16:41:53.569677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:54.917 [2024-12-07 16:41:53.569953] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:54.917 [2024-12-07 16:41:53.569979] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:54.917 [2024-12-07 16:41:53.570308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:54.917 [2024-12-07 16:41:53.570505] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:54.917 [2024-12-07 16:41:53.570530] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000006980 00:15:54.917 [2024-12-07 16:41:53.570682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.917 [ 00:15:54.917 { 00:15:54.917 "name": "BaseBdev2", 00:15:54.917 "aliases": [ 00:15:54.917 "5273af33-52ec-4c17-af10-cf6b52505ab1" 00:15:54.917 ], 00:15:54.917 "product_name": "Malloc 
disk", 00:15:54.917 "block_size": 4096, 00:15:54.917 "num_blocks": 8192, 00:15:54.917 "uuid": "5273af33-52ec-4c17-af10-cf6b52505ab1", 00:15:54.917 "assigned_rate_limits": { 00:15:54.917 "rw_ios_per_sec": 0, 00:15:54.917 "rw_mbytes_per_sec": 0, 00:15:54.917 "r_mbytes_per_sec": 0, 00:15:54.917 "w_mbytes_per_sec": 0 00:15:54.917 }, 00:15:54.917 "claimed": true, 00:15:54.917 "claim_type": "exclusive_write", 00:15:54.917 "zoned": false, 00:15:54.917 "supported_io_types": { 00:15:54.917 "read": true, 00:15:54.917 "write": true, 00:15:54.917 "unmap": true, 00:15:54.917 "flush": true, 00:15:54.917 "reset": true, 00:15:54.917 "nvme_admin": false, 00:15:54.917 "nvme_io": false, 00:15:54.917 "nvme_io_md": false, 00:15:54.917 "write_zeroes": true, 00:15:54.917 "zcopy": true, 00:15:54.917 "get_zone_info": false, 00:15:54.917 "zone_management": false, 00:15:54.917 "zone_append": false, 00:15:54.917 "compare": false, 00:15:54.917 "compare_and_write": false, 00:15:54.917 "abort": true, 00:15:54.917 "seek_hole": false, 00:15:54.917 "seek_data": false, 00:15:54.917 "copy": true, 00:15:54.917 "nvme_iov_md": false 00:15:54.917 }, 00:15:54.917 "memory_domains": [ 00:15:54.917 { 00:15:54.917 "dma_device_id": "system", 00:15:54.917 "dma_device_type": 1 00:15:54.917 }, 00:15:54.917 { 00:15:54.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.917 "dma_device_type": 2 00:15:54.917 } 00:15:54.917 ], 00:15:54.917 "driver_specific": {} 00:15:54.917 } 00:15:54.917 ] 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.917 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.917 "name": "Existed_Raid", 00:15:54.917 "uuid": "749aa188-5887-40fb-8eca-a9bdc18b7bd5", 00:15:54.918 "strip_size_kb": 0, 00:15:54.918 "state": "online", 
00:15:54.918 "raid_level": "raid1", 00:15:54.918 "superblock": true, 00:15:54.918 "num_base_bdevs": 2, 00:15:54.918 "num_base_bdevs_discovered": 2, 00:15:54.918 "num_base_bdevs_operational": 2, 00:15:54.918 "base_bdevs_list": [ 00:15:54.918 { 00:15:54.918 "name": "BaseBdev1", 00:15:54.918 "uuid": "3676d104-c3d7-4664-ab3f-c5186a04e828", 00:15:54.918 "is_configured": true, 00:15:54.918 "data_offset": 256, 00:15:54.918 "data_size": 7936 00:15:54.918 }, 00:15:54.918 { 00:15:54.918 "name": "BaseBdev2", 00:15:54.918 "uuid": "5273af33-52ec-4c17-af10-cf6b52505ab1", 00:15:54.918 "is_configured": true, 00:15:54.918 "data_offset": 256, 00:15:54.918 "data_size": 7936 00:15:54.918 } 00:15:54.918 ] 00:15:54.918 }' 00:15:54.918 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.918 16:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.177 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:55.177 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:55.177 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:55.177 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:55.177 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:55.177 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:55.177 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:55.177 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:55.178 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:55.178 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.178 [2024-12-07 16:41:54.049256] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.178 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:55.437 "name": "Existed_Raid", 00:15:55.437 "aliases": [ 00:15:55.437 "749aa188-5887-40fb-8eca-a9bdc18b7bd5" 00:15:55.437 ], 00:15:55.437 "product_name": "Raid Volume", 00:15:55.437 "block_size": 4096, 00:15:55.437 "num_blocks": 7936, 00:15:55.437 "uuid": "749aa188-5887-40fb-8eca-a9bdc18b7bd5", 00:15:55.437 "assigned_rate_limits": { 00:15:55.437 "rw_ios_per_sec": 0, 00:15:55.437 "rw_mbytes_per_sec": 0, 00:15:55.437 "r_mbytes_per_sec": 0, 00:15:55.437 "w_mbytes_per_sec": 0 00:15:55.437 }, 00:15:55.437 "claimed": false, 00:15:55.437 "zoned": false, 00:15:55.437 "supported_io_types": { 00:15:55.437 "read": true, 00:15:55.437 "write": true, 00:15:55.437 "unmap": false, 00:15:55.437 "flush": false, 00:15:55.437 "reset": true, 00:15:55.437 "nvme_admin": false, 00:15:55.437 "nvme_io": false, 00:15:55.437 "nvme_io_md": false, 00:15:55.437 "write_zeroes": true, 00:15:55.437 "zcopy": false, 00:15:55.437 "get_zone_info": false, 00:15:55.437 "zone_management": false, 00:15:55.437 "zone_append": false, 00:15:55.437 "compare": false, 00:15:55.437 "compare_and_write": false, 00:15:55.437 "abort": false, 00:15:55.437 "seek_hole": false, 00:15:55.437 "seek_data": false, 00:15:55.437 "copy": false, 00:15:55.437 "nvme_iov_md": false 00:15:55.437 }, 00:15:55.437 "memory_domains": [ 00:15:55.437 { 00:15:55.437 "dma_device_id": "system", 00:15:55.437 "dma_device_type": 1 00:15:55.437 }, 00:15:55.437 { 00:15:55.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.437 "dma_device_type": 2 00:15:55.437 }, 00:15:55.437 { 00:15:55.437 
"dma_device_id": "system", 00:15:55.437 "dma_device_type": 1 00:15:55.437 }, 00:15:55.437 { 00:15:55.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.437 "dma_device_type": 2 00:15:55.437 } 00:15:55.437 ], 00:15:55.437 "driver_specific": { 00:15:55.437 "raid": { 00:15:55.437 "uuid": "749aa188-5887-40fb-8eca-a9bdc18b7bd5", 00:15:55.437 "strip_size_kb": 0, 00:15:55.437 "state": "online", 00:15:55.437 "raid_level": "raid1", 00:15:55.437 "superblock": true, 00:15:55.437 "num_base_bdevs": 2, 00:15:55.437 "num_base_bdevs_discovered": 2, 00:15:55.437 "num_base_bdevs_operational": 2, 00:15:55.437 "base_bdevs_list": [ 00:15:55.437 { 00:15:55.437 "name": "BaseBdev1", 00:15:55.437 "uuid": "3676d104-c3d7-4664-ab3f-c5186a04e828", 00:15:55.437 "is_configured": true, 00:15:55.437 "data_offset": 256, 00:15:55.437 "data_size": 7936 00:15:55.437 }, 00:15:55.437 { 00:15:55.437 "name": "BaseBdev2", 00:15:55.437 "uuid": "5273af33-52ec-4c17-af10-cf6b52505ab1", 00:15:55.437 "is_configured": true, 00:15:55.437 "data_offset": 256, 00:15:55.437 "data_size": 7936 00:15:55.437 } 00:15:55.437 ] 00:15:55.437 } 00:15:55.437 } 00:15:55.437 }' 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:55.437 BaseBdev2' 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.437 
16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.437 [2024-12-07 16:41:54.296597] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:55.437 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.438 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.438 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.438 16:41:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.438 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.438 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.438 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.438 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.697 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.697 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.697 "name": "Existed_Raid", 00:15:55.697 "uuid": "749aa188-5887-40fb-8eca-a9bdc18b7bd5", 00:15:55.697 "strip_size_kb": 0, 00:15:55.697 "state": "online", 00:15:55.697 "raid_level": "raid1", 00:15:55.697 "superblock": true, 00:15:55.697 "num_base_bdevs": 2, 00:15:55.697 "num_base_bdevs_discovered": 1, 00:15:55.697 "num_base_bdevs_operational": 1, 00:15:55.697 "base_bdevs_list": [ 00:15:55.697 { 00:15:55.697 "name": null, 00:15:55.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.697 "is_configured": false, 00:15:55.697 "data_offset": 0, 00:15:55.697 "data_size": 7936 00:15:55.697 }, 00:15:55.697 { 00:15:55.697 "name": "BaseBdev2", 00:15:55.697 "uuid": "5273af33-52ec-4c17-af10-cf6b52505ab1", 00:15:55.697 "is_configured": true, 00:15:55.697 "data_offset": 256, 00:15:55.697 "data_size": 7936 00:15:55.697 } 00:15:55.697 ] 00:15:55.697 }' 00:15:55.697 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.697 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:56.004 16:41:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.004 [2024-12-07 16:41:54.852971] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:56.004 [2024-12-07 16:41:54.853106] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.004 [2024-12-07 16:41:54.874240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.004 [2024-12-07 16:41:54.874308] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.004 [2024-12-07 16:41:54.874330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:56.004 16:41:54 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.004 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.264 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:56.264 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:56.264 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:56.264 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96658 00:15:56.264 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96658 ']' 00:15:56.264 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96658 00:15:56.264 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:56.264 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.264 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96658 00:15:56.264 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:56.264 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:56.264 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96658' 00:15:56.264 killing process with pid 96658 00:15:56.264 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96658 00:15:56.264 [2024-12-07 16:41:54.974310] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.264 16:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96658 00:15:56.264 [2024-12-07 16:41:54.975971] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.525 16:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:56.525 00:15:56.525 real 0m4.289s 00:15:56.525 user 0m6.530s 00:15:56.525 sys 0m1.011s 00:15:56.525 16:41:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.525 16:41:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.525 ************************************ 00:15:56.525 END TEST raid_state_function_test_sb_4k 00:15:56.525 ************************************ 00:15:56.525 16:41:55 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:56.525 16:41:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:56.525 16:41:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.525 16:41:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:56.785 ************************************ 00:15:56.785 START TEST raid_superblock_test_4k 00:15:56.785 ************************************ 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # 
raid_superblock_test raid1 2 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96899 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 96899 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96899 ']' 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.785 16:41:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.785 [2024-12-07 16:41:55.524184] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:56.785 [2024-12-07 16:41:55.524448] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96899 ] 00:15:57.044 [2024-12-07 16:41:55.690124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.044 [2024-12-07 16:41:55.771264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.044 [2024-12-07 16:41:55.848719] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.044 [2024-12-07 16:41:55.848894] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:15:57.612 16:41:56 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.612 malloc1 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.612 [2024-12-07 16:41:56.404144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:57.612 [2024-12-07 16:41:56.404276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.612 
[2024-12-07 16:41:56.404320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:57.612 [2024-12-07 16:41:56.404383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.612 [2024-12-07 16:41:56.406922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.612 [2024-12-07 16:41:56.406993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:57.612 pt1 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.612 malloc2 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.612 [2024-12-07 16:41:56.458423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:57.612 [2024-12-07 16:41:56.458508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.612 [2024-12-07 16:41:56.458528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:57.612 [2024-12-07 16:41:56.458541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.612 [2024-12-07 16:41:56.461125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.612 [2024-12-07 16:41:56.461227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:57.612 pt2 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:57.612 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.613 [2024-12-07 16:41:56.470450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:57.613 [2024-12-07 16:41:56.472710] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.613 [2024-12-07 16:41:56.472885] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:57.613 [2024-12-07 16:41:56.472900] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:57.613 [2024-12-07 16:41:56.473203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:57.613 [2024-12-07 16:41:56.473389] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:57.613 [2024-12-07 16:41:56.473400] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:57.613 [2024-12-07 16:41:56.473562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.613 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.872 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.872 "name": "raid_bdev1", 00:15:57.872 "uuid": "77c03e10-34f0-44aa-95a8-8c6ce6270795", 00:15:57.872 "strip_size_kb": 0, 00:15:57.872 "state": "online", 00:15:57.872 "raid_level": "raid1", 00:15:57.872 "superblock": true, 00:15:57.872 "num_base_bdevs": 2, 00:15:57.872 "num_base_bdevs_discovered": 2, 00:15:57.872 "num_base_bdevs_operational": 2, 00:15:57.872 "base_bdevs_list": [ 00:15:57.872 { 00:15:57.872 "name": "pt1", 00:15:57.872 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.872 "is_configured": true, 00:15:57.872 "data_offset": 256, 00:15:57.872 "data_size": 7936 00:15:57.872 }, 00:15:57.872 { 00:15:57.872 "name": "pt2", 00:15:57.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.872 "is_configured": true, 00:15:57.872 "data_offset": 256, 00:15:57.872 "data_size": 7936 00:15:57.872 } 00:15:57.872 ] 00:15:57.872 }' 00:15:57.872 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.872 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.131 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:58.131 16:41:56 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:58.131 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:58.131 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:58.131 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:58.131 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:58.131 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:58.131 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:58.131 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.131 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.131 [2024-12-07 16:41:56.945950] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.131 16:41:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.131 16:41:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:58.131 "name": "raid_bdev1", 00:15:58.131 "aliases": [ 00:15:58.131 "77c03e10-34f0-44aa-95a8-8c6ce6270795" 00:15:58.131 ], 00:15:58.131 "product_name": "Raid Volume", 00:15:58.131 "block_size": 4096, 00:15:58.131 "num_blocks": 7936, 00:15:58.131 "uuid": "77c03e10-34f0-44aa-95a8-8c6ce6270795", 00:15:58.131 "assigned_rate_limits": { 00:15:58.131 "rw_ios_per_sec": 0, 00:15:58.131 "rw_mbytes_per_sec": 0, 00:15:58.131 "r_mbytes_per_sec": 0, 00:15:58.131 "w_mbytes_per_sec": 0 00:15:58.131 }, 00:15:58.131 "claimed": false, 00:15:58.131 "zoned": false, 00:15:58.131 "supported_io_types": { 00:15:58.132 "read": true, 00:15:58.132 "write": true, 00:15:58.132 "unmap": false, 00:15:58.132 "flush": false, 
00:15:58.132 "reset": true, 00:15:58.132 "nvme_admin": false, 00:15:58.132 "nvme_io": false, 00:15:58.132 "nvme_io_md": false, 00:15:58.132 "write_zeroes": true, 00:15:58.132 "zcopy": false, 00:15:58.132 "get_zone_info": false, 00:15:58.132 "zone_management": false, 00:15:58.132 "zone_append": false, 00:15:58.132 "compare": false, 00:15:58.132 "compare_and_write": false, 00:15:58.132 "abort": false, 00:15:58.132 "seek_hole": false, 00:15:58.132 "seek_data": false, 00:15:58.132 "copy": false, 00:15:58.132 "nvme_iov_md": false 00:15:58.132 }, 00:15:58.132 "memory_domains": [ 00:15:58.132 { 00:15:58.132 "dma_device_id": "system", 00:15:58.132 "dma_device_type": 1 00:15:58.132 }, 00:15:58.132 { 00:15:58.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.132 "dma_device_type": 2 00:15:58.132 }, 00:15:58.132 { 00:15:58.132 "dma_device_id": "system", 00:15:58.132 "dma_device_type": 1 00:15:58.132 }, 00:15:58.132 { 00:15:58.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.132 "dma_device_type": 2 00:15:58.132 } 00:15:58.132 ], 00:15:58.132 "driver_specific": { 00:15:58.132 "raid": { 00:15:58.132 "uuid": "77c03e10-34f0-44aa-95a8-8c6ce6270795", 00:15:58.132 "strip_size_kb": 0, 00:15:58.132 "state": "online", 00:15:58.132 "raid_level": "raid1", 00:15:58.132 "superblock": true, 00:15:58.132 "num_base_bdevs": 2, 00:15:58.132 "num_base_bdevs_discovered": 2, 00:15:58.132 "num_base_bdevs_operational": 2, 00:15:58.132 "base_bdevs_list": [ 00:15:58.132 { 00:15:58.132 "name": "pt1", 00:15:58.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.132 "is_configured": true, 00:15:58.132 "data_offset": 256, 00:15:58.132 "data_size": 7936 00:15:58.132 }, 00:15:58.132 { 00:15:58.132 "name": "pt2", 00:15:58.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.132 "is_configured": true, 00:15:58.132 "data_offset": 256, 00:15:58.132 "data_size": 7936 00:15:58.132 } 00:15:58.132 ] 00:15:58.132 } 00:15:58.132 } 00:15:58.132 }' 00:15:58.132 16:41:56 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.132 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:58.132 pt2' 00:15:58.132 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.392 16:41:57 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:58.392 [2024-12-07 16:41:57.149514] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=77c03e10-34f0-44aa-95a8-8c6ce6270795 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 77c03e10-34f0-44aa-95a8-8c6ce6270795 ']' 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 [2024-12-07 16:41:57.201137] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.392 [2024-12-07 16:41:57.201175] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.392 [2024-12-07 16:41:57.201311] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.392 [2024-12-07 16:41:57.201411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.392 [2024-12-07 16:41:57.201424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:58.392 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.652 [2024-12-07 16:41:57.344940] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:58.652 [2024-12-07 16:41:57.347239] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:58.652 [2024-12-07 16:41:57.347392] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:58.652 [2024-12-07 16:41:57.347499] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:58.652 [2024-12-07 16:41:57.347567] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.652 [2024-12-07 16:41:57.347596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:58.652 request: 00:15:58.652 { 00:15:58.652 "name": "raid_bdev1", 00:15:58.652 "raid_level": "raid1", 00:15:58.652 "base_bdevs": [ 00:15:58.652 "malloc1", 00:15:58.652 "malloc2" 00:15:58.652 ], 00:15:58.652 "superblock": false, 00:15:58.652 "method": "bdev_raid_create", 00:15:58.652 "req_id": 1 00:15:58.652 } 00:15:58.652 Got JSON-RPC error response 00:15:58.652 response: 00:15:58.652 { 00:15:58.652 "code": -17, 00:15:58.652 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:58.652 } 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.652 [2024-12-07 16:41:57.416764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:58.652 [2024-12-07 16:41:57.416848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.652 [2024-12-07 16:41:57.416874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:58.652 [2024-12-07 16:41:57.416883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.652 [2024-12-07 16:41:57.419540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.652 [2024-12-07 16:41:57.419584] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:58.652 [2024-12-07 16:41:57.419690] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:58.652 [2024-12-07 16:41:57.419753] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:58.652 pt1 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.652 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.653 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.653 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.653 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.653 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.653 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.653 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.653 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.653 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.653 16:41:57 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.653 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.653 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.653 "name": "raid_bdev1", 00:15:58.653 "uuid": "77c03e10-34f0-44aa-95a8-8c6ce6270795", 00:15:58.653 "strip_size_kb": 0, 00:15:58.653 "state": "configuring", 00:15:58.653 "raid_level": "raid1", 00:15:58.653 "superblock": true, 00:15:58.653 "num_base_bdevs": 2, 00:15:58.653 "num_base_bdevs_discovered": 1, 00:15:58.653 "num_base_bdevs_operational": 2, 00:15:58.653 "base_bdevs_list": [ 00:15:58.653 { 00:15:58.653 "name": "pt1", 00:15:58.653 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.653 "is_configured": true, 00:15:58.653 "data_offset": 256, 00:15:58.653 "data_size": 7936 00:15:58.653 }, 00:15:58.653 { 00:15:58.653 "name": null, 00:15:58.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.653 "is_configured": false, 00:15:58.653 "data_offset": 256, 00:15:58.653 "data_size": 7936 00:15:58.653 } 00:15:58.653 ] 00:15:58.653 }' 00:15:58.653 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.653 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:15:59.222 [2024-12-07 16:41:57.844062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:59.222 [2024-12-07 16:41:57.844236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.222 [2024-12-07 16:41:57.844287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:59.222 [2024-12-07 16:41:57.844318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.222 [2024-12-07 16:41:57.844917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.222 [2024-12-07 16:41:57.844978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:59.222 [2024-12-07 16:41:57.845102] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:59.222 [2024-12-07 16:41:57.845155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:59.222 [2024-12-07 16:41:57.845304] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:59.222 [2024-12-07 16:41:57.845357] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:59.222 [2024-12-07 16:41:57.845650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:59.222 [2024-12-07 16:41:57.845821] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:59.222 [2024-12-07 16:41:57.845868] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:59.222 [2024-12-07 16:41:57.846023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.222 pt2 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:59.222 16:41:57 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.222 "name": "raid_bdev1", 00:15:59.222 "uuid": "77c03e10-34f0-44aa-95a8-8c6ce6270795", 00:15:59.222 
"strip_size_kb": 0, 00:15:59.222 "state": "online", 00:15:59.222 "raid_level": "raid1", 00:15:59.222 "superblock": true, 00:15:59.222 "num_base_bdevs": 2, 00:15:59.222 "num_base_bdevs_discovered": 2, 00:15:59.222 "num_base_bdevs_operational": 2, 00:15:59.222 "base_bdevs_list": [ 00:15:59.222 { 00:15:59.222 "name": "pt1", 00:15:59.222 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.222 "is_configured": true, 00:15:59.222 "data_offset": 256, 00:15:59.222 "data_size": 7936 00:15:59.222 }, 00:15:59.222 { 00:15:59.222 "name": "pt2", 00:15:59.222 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.222 "is_configured": true, 00:15:59.222 "data_offset": 256, 00:15:59.222 "data_size": 7936 00:15:59.222 } 00:15:59.222 ] 00:15:59.222 }' 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.222 16:41:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.482 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:59.482 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:59.482 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:59.482 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:59.482 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:59.482 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:59.482 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.482 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:59.482 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.482 16:41:58 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.482 [2024-12-07 16:41:58.311585] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.482 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.482 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:59.482 "name": "raid_bdev1", 00:15:59.482 "aliases": [ 00:15:59.482 "77c03e10-34f0-44aa-95a8-8c6ce6270795" 00:15:59.482 ], 00:15:59.482 "product_name": "Raid Volume", 00:15:59.482 "block_size": 4096, 00:15:59.482 "num_blocks": 7936, 00:15:59.482 "uuid": "77c03e10-34f0-44aa-95a8-8c6ce6270795", 00:15:59.482 "assigned_rate_limits": { 00:15:59.482 "rw_ios_per_sec": 0, 00:15:59.482 "rw_mbytes_per_sec": 0, 00:15:59.482 "r_mbytes_per_sec": 0, 00:15:59.482 "w_mbytes_per_sec": 0 00:15:59.482 }, 00:15:59.482 "claimed": false, 00:15:59.482 "zoned": false, 00:15:59.482 "supported_io_types": { 00:15:59.482 "read": true, 00:15:59.482 "write": true, 00:15:59.482 "unmap": false, 00:15:59.482 "flush": false, 00:15:59.482 "reset": true, 00:15:59.482 "nvme_admin": false, 00:15:59.482 "nvme_io": false, 00:15:59.482 "nvme_io_md": false, 00:15:59.482 "write_zeroes": true, 00:15:59.482 "zcopy": false, 00:15:59.482 "get_zone_info": false, 00:15:59.482 "zone_management": false, 00:15:59.482 "zone_append": false, 00:15:59.482 "compare": false, 00:15:59.482 "compare_and_write": false, 00:15:59.482 "abort": false, 00:15:59.482 "seek_hole": false, 00:15:59.482 "seek_data": false, 00:15:59.482 "copy": false, 00:15:59.482 "nvme_iov_md": false 00:15:59.482 }, 00:15:59.482 "memory_domains": [ 00:15:59.482 { 00:15:59.482 "dma_device_id": "system", 00:15:59.482 "dma_device_type": 1 00:15:59.482 }, 00:15:59.482 { 00:15:59.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.482 "dma_device_type": 2 00:15:59.482 }, 00:15:59.482 { 00:15:59.482 "dma_device_id": "system", 00:15:59.482 
"dma_device_type": 1 00:15:59.482 }, 00:15:59.482 { 00:15:59.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.482 "dma_device_type": 2 00:15:59.482 } 00:15:59.482 ], 00:15:59.482 "driver_specific": { 00:15:59.482 "raid": { 00:15:59.482 "uuid": "77c03e10-34f0-44aa-95a8-8c6ce6270795", 00:15:59.482 "strip_size_kb": 0, 00:15:59.483 "state": "online", 00:15:59.483 "raid_level": "raid1", 00:15:59.483 "superblock": true, 00:15:59.483 "num_base_bdevs": 2, 00:15:59.483 "num_base_bdevs_discovered": 2, 00:15:59.483 "num_base_bdevs_operational": 2, 00:15:59.483 "base_bdevs_list": [ 00:15:59.483 { 00:15:59.483 "name": "pt1", 00:15:59.483 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.483 "is_configured": true, 00:15:59.483 "data_offset": 256, 00:15:59.483 "data_size": 7936 00:15:59.483 }, 00:15:59.483 { 00:15:59.483 "name": "pt2", 00:15:59.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.483 "is_configured": true, 00:15:59.483 "data_offset": 256, 00:15:59.483 "data_size": 7936 00:15:59.483 } 00:15:59.483 ] 00:15:59.483 } 00:15:59.483 } 00:15:59.483 }' 00:15:59.483 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:59.743 pt2' 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.743 [2024-12-07 
16:41:58.543165] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 77c03e10-34f0-44aa-95a8-8c6ce6270795 '!=' 77c03e10-34f0-44aa-95a8-8c6ce6270795 ']' 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.743 [2024-12-07 16:41:58.590822] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.743 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.744 16:41:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.003 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.003 "name": "raid_bdev1", 00:16:00.003 "uuid": "77c03e10-34f0-44aa-95a8-8c6ce6270795", 00:16:00.003 "strip_size_kb": 0, 00:16:00.003 "state": "online", 00:16:00.003 "raid_level": "raid1", 00:16:00.003 "superblock": true, 00:16:00.003 "num_base_bdevs": 2, 00:16:00.003 "num_base_bdevs_discovered": 1, 00:16:00.004 "num_base_bdevs_operational": 1, 00:16:00.004 "base_bdevs_list": [ 00:16:00.004 { 00:16:00.004 "name": null, 00:16:00.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.004 "is_configured": false, 00:16:00.004 "data_offset": 0, 00:16:00.004 "data_size": 7936 00:16:00.004 }, 00:16:00.004 { 00:16:00.004 "name": "pt2", 00:16:00.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.004 "is_configured": true, 00:16:00.004 "data_offset": 256, 00:16:00.004 "data_size": 7936 00:16:00.004 } 00:16:00.004 ] 00:16:00.004 }' 00:16:00.004 16:41:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.004 16:41:58 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.262 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:00.262 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.262 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.263 [2024-12-07 16:41:59.057931] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.263 [2024-12-07 16:41:59.057975] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.263 [2024-12-07 16:41:59.058093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.263 [2024-12-07 16:41:59.058152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.263 [2024-12-07 16:41:59.058162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.263 [2024-12-07 16:41:59.125839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:00.263 [2024-12-07 16:41:59.125935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.263 [2024-12-07 16:41:59.125960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:00.263 [2024-12-07 16:41:59.125970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.263 [2024-12-07 16:41:59.128693] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.263 [2024-12-07 16:41:59.128811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:00.263 [2024-12-07 16:41:59.128939] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:00.263 [2024-12-07 16:41:59.128983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:00.263 [2024-12-07 16:41:59.129084] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:00.263 [2024-12-07 16:41:59.129092] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:00.263 [2024-12-07 16:41:59.129376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:00.263 [2024-12-07 16:41:59.129515] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:00.263 [2024-12-07 16:41:59.129528] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:00.263 [2024-12-07 16:41:59.129658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.263 pt2 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.263 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.521 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.521 "name": "raid_bdev1", 00:16:00.521 "uuid": "77c03e10-34f0-44aa-95a8-8c6ce6270795", 00:16:00.521 "strip_size_kb": 0, 00:16:00.521 "state": "online", 00:16:00.521 "raid_level": "raid1", 00:16:00.521 "superblock": true, 00:16:00.521 "num_base_bdevs": 2, 00:16:00.521 "num_base_bdevs_discovered": 1, 00:16:00.521 "num_base_bdevs_operational": 1, 00:16:00.521 "base_bdevs_list": [ 00:16:00.521 { 00:16:00.521 "name": null, 00:16:00.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.521 "is_configured": false, 00:16:00.521 "data_offset": 256, 00:16:00.521 "data_size": 7936 00:16:00.521 }, 00:16:00.521 { 00:16:00.521 "name": "pt2", 00:16:00.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.521 "is_configured": true, 00:16:00.521 "data_offset": 256, 00:16:00.521 "data_size": 7936 00:16:00.521 } 00:16:00.521 ] 00:16:00.521 }' 
00:16:00.521 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.521 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.787 [2024-12-07 16:41:59.553130] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.787 [2024-12-07 16:41:59.553236] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.787 [2024-12-07 16:41:59.553374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.787 [2024-12-07 16:41:59.553455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.787 [2024-12-07 16:41:59.553529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.787 [2024-12-07 16:41:59.612978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:00.787 [2024-12-07 16:41:59.613123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.787 [2024-12-07 16:41:59.613171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:00.787 [2024-12-07 16:41:59.613214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.787 [2024-12-07 16:41:59.615840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.787 [2024-12-07 16:41:59.615925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:00.787 [2024-12-07 16:41:59.616050] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:00.787 [2024-12-07 16:41:59.616104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:00.787 [2024-12-07 16:41:59.616230] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:00.787 [2024-12-07 16:41:59.616245] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.787 [2024-12-07 16:41:59.616264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:00.787 [2024-12-07 16:41:59.616314] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:00.787 [2024-12-07 16:41:59.616418] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:00.787 [2024-12-07 16:41:59.616429] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:00.787 [2024-12-07 16:41:59.616677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:00.787 [2024-12-07 16:41:59.616797] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:00.787 [2024-12-07 16:41:59.616807] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:00.787 [2024-12-07 16:41:59.616973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.787 pt1 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.787 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.787 "name": "raid_bdev1", 00:16:00.787 "uuid": "77c03e10-34f0-44aa-95a8-8c6ce6270795", 00:16:00.787 "strip_size_kb": 0, 00:16:00.787 "state": "online", 00:16:00.787 "raid_level": "raid1", 00:16:00.787 "superblock": true, 00:16:00.787 "num_base_bdevs": 2, 00:16:00.787 "num_base_bdevs_discovered": 1, 00:16:00.787 "num_base_bdevs_operational": 1, 00:16:00.788 "base_bdevs_list": [ 00:16:00.788 { 00:16:00.788 "name": null, 00:16:00.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.788 "is_configured": false, 00:16:00.788 "data_offset": 256, 00:16:00.788 "data_size": 7936 00:16:00.788 }, 00:16:00.788 { 00:16:00.788 "name": "pt2", 00:16:00.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.788 "is_configured": true, 00:16:00.788 "data_offset": 256, 00:16:00.788 "data_size": 7936 00:16:00.788 } 00:16:00.788 ] 00:16:00.788 }' 00:16:00.788 16:41:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.788 16:41:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.369 [2024-12-07 16:42:00.100492] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 77c03e10-34f0-44aa-95a8-8c6ce6270795 '!=' 77c03e10-34f0-44aa-95a8-8c6ce6270795 ']' 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96899 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96899 ']' 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96899 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96899 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:01.369 killing process with pid 96899 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96899' 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96899 00:16:01.369 [2024-12-07 16:42:00.160939] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.369 [2024-12-07 16:42:00.161058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.369 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96899 00:16:01.369 [2024-12-07 16:42:00.161120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.369 [2024-12-07 16:42:00.161130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:01.369 [2024-12-07 16:42:00.203637] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.939 16:42:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:01.939 00:16:01.939 real 0m5.155s 00:16:01.939 user 0m8.174s 00:16:01.939 sys 0m1.221s 00:16:01.939 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:01.939 16:42:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.939 ************************************ 00:16:01.939 END TEST raid_superblock_test_4k 00:16:01.939 ************************************ 00:16:01.939 16:42:00 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:16:01.939 16:42:00 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:01.940 16:42:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:01.940 16:42:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:01.940 16:42:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.940 ************************************ 00:16:01.940 START TEST raid_rebuild_test_sb_4k 00:16:01.940 ************************************ 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=97216 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 97216 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 97216 ']' 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.940 16:42:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.940 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:01.940 Zero copy mechanism will not be used. 00:16:01.940 [2024-12-07 16:42:00.758753] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:01.940 [2024-12-07 16:42:00.758905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97216 ] 00:16:02.201 [2024-12-07 16:42:00.919746] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.201 [2024-12-07 16:42:01.000451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.201 [2024-12-07 16:42:01.077629] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.201 [2024-12-07 16:42:01.077680] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.770 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:02.770 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:16:02.770 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:02.770 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:02.770 
16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.770 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.770 BaseBdev1_malloc 00:16:02.770 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.770 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:02.770 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.770 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.770 [2024-12-07 16:42:01.637082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:02.770 [2024-12-07 16:42:01.637177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.770 [2024-12-07 16:42:01.637207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:02.770 [2024-12-07 16:42:01.637235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.770 [2024-12-07 16:42:01.639821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.770 [2024-12-07 16:42:01.639909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:02.770 BaseBdev1 00:16:02.770 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.770 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:02.770 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:02.770 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.770 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.030 BaseBdev2_malloc 00:16:03.030 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.030 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:03.030 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.030 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.030 [2024-12-07 16:42:01.693988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:03.030 [2024-12-07 16:42:01.694079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.030 [2024-12-07 16:42:01.694114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:03.030 [2024-12-07 16:42:01.694128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.030 [2024-12-07 16:42:01.697655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.030 BaseBdev2 00:16:03.030 [2024-12-07 16:42:01.697790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:03.030 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.030 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:03.030 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.030 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.030 spare_malloc 00:16:03.030 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.030 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.031 spare_delay 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.031 [2024-12-07 16:42:01.741641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:03.031 [2024-12-07 16:42:01.741745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.031 [2024-12-07 16:42:01.741775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:03.031 [2024-12-07 16:42:01.741785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.031 [2024-12-07 16:42:01.744448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.031 [2024-12-07 16:42:01.744531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:03.031 spare 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.031 
[2024-12-07 16:42:01.753681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.031 [2024-12-07 16:42:01.755938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.031 [2024-12-07 16:42:01.756182] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:03.031 [2024-12-07 16:42:01.756199] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:03.031 [2024-12-07 16:42:01.756515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:03.031 [2024-12-07 16:42:01.756695] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:03.031 [2024-12-07 16:42:01.756709] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:03.031 [2024-12-07 16:42:01.756881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.031 "name": "raid_bdev1", 00:16:03.031 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:03.031 "strip_size_kb": 0, 00:16:03.031 "state": "online", 00:16:03.031 "raid_level": "raid1", 00:16:03.031 "superblock": true, 00:16:03.031 "num_base_bdevs": 2, 00:16:03.031 "num_base_bdevs_discovered": 2, 00:16:03.031 "num_base_bdevs_operational": 2, 00:16:03.031 "base_bdevs_list": [ 00:16:03.031 { 00:16:03.031 "name": "BaseBdev1", 00:16:03.031 "uuid": "c2f43568-0b99-50f6-9058-de59d61c1228", 00:16:03.031 "is_configured": true, 00:16:03.031 "data_offset": 256, 00:16:03.031 "data_size": 7936 00:16:03.031 }, 00:16:03.031 { 00:16:03.031 "name": "BaseBdev2", 00:16:03.031 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:03.031 "is_configured": true, 00:16:03.031 "data_offset": 256, 00:16:03.031 "data_size": 7936 00:16:03.031 } 00:16:03.031 ] 00:16:03.031 }' 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.031 16:42:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.600 [2024-12-07 16:42:02.217185] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:03.600 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:03.600 [2024-12-07 16:42:02.476508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:03.600 /dev/nbd0 00:16:03.859 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.860 1+0 records in 00:16:03.860 1+0 records out 00:16:03.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609178 s, 6.7 MB/s 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:03.860 16:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:04.430 7936+0 records in 00:16:04.430 7936+0 records out 00:16:04.430 32505856 bytes (33 MB, 31 MiB) copied, 0.63449 s, 51.2 MB/s 00:16:04.430 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:04.430 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.430 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:04.430 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.430 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:04.430 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.430 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:04.691 [2024-12-07 16:42:03.393746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.691 [2024-12-07 16:42:03.413884] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.691 "name": 
"raid_bdev1", 00:16:04.691 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:04.691 "strip_size_kb": 0, 00:16:04.691 "state": "online", 00:16:04.691 "raid_level": "raid1", 00:16:04.691 "superblock": true, 00:16:04.691 "num_base_bdevs": 2, 00:16:04.691 "num_base_bdevs_discovered": 1, 00:16:04.691 "num_base_bdevs_operational": 1, 00:16:04.691 "base_bdevs_list": [ 00:16:04.691 { 00:16:04.691 "name": null, 00:16:04.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.691 "is_configured": false, 00:16:04.691 "data_offset": 0, 00:16:04.691 "data_size": 7936 00:16:04.691 }, 00:16:04.691 { 00:16:04.691 "name": "BaseBdev2", 00:16:04.691 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:04.691 "is_configured": true, 00:16:04.691 "data_offset": 256, 00:16:04.691 "data_size": 7936 00:16:04.691 } 00:16:04.691 ] 00:16:04.691 }' 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.691 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.261 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:05.261 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.261 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.262 [2024-12-07 16:42:03.893051] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.262 [2024-12-07 16:42:03.900595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:16:05.262 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.262 16:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:05.262 [2024-12-07 16:42:03.902980] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.205 16:42:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.205 16:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.205 16:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.205 16:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.205 16:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.205 16:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.205 16:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.205 16:42:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.205 16:42:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.205 16:42:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.205 16:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.205 "name": "raid_bdev1", 00:16:06.205 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:06.205 "strip_size_kb": 0, 00:16:06.205 "state": "online", 00:16:06.205 "raid_level": "raid1", 00:16:06.205 "superblock": true, 00:16:06.205 "num_base_bdevs": 2, 00:16:06.205 "num_base_bdevs_discovered": 2, 00:16:06.205 "num_base_bdevs_operational": 2, 00:16:06.205 "process": { 00:16:06.205 "type": "rebuild", 00:16:06.205 "target": "spare", 00:16:06.205 "progress": { 00:16:06.205 "blocks": 2560, 00:16:06.205 "percent": 32 00:16:06.205 } 00:16:06.205 }, 00:16:06.205 "base_bdevs_list": [ 00:16:06.205 { 00:16:06.205 "name": "spare", 00:16:06.205 "uuid": "4ec9b151-bfef-59b0-9dce-07a167190a13", 00:16:06.205 "is_configured": true, 00:16:06.205 "data_offset": 256, 
00:16:06.205 "data_size": 7936 00:16:06.205 }, 00:16:06.205 { 00:16:06.205 "name": "BaseBdev2", 00:16:06.205 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:06.205 "is_configured": true, 00:16:06.205 "data_offset": 256, 00:16:06.205 "data_size": 7936 00:16:06.205 } 00:16:06.205 ] 00:16:06.205 }' 00:16:06.205 16:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.205 16:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.205 16:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.205 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.205 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:06.205 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.205 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.205 [2024-12-07 16:42:05.047578] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.464 [2024-12-07 16:42:05.113194] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:06.465 [2024-12-07 16:42:05.113289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.465 [2024-12-07 16:42:05.113313] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.465 [2024-12-07 16:42:05.113322] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:06.465 
16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.465 "name": "raid_bdev1", 00:16:06.465 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:06.465 "strip_size_kb": 0, 00:16:06.465 "state": "online", 00:16:06.465 "raid_level": "raid1", 00:16:06.465 "superblock": true, 00:16:06.465 "num_base_bdevs": 2, 00:16:06.465 "num_base_bdevs_discovered": 1, 00:16:06.465 
"num_base_bdevs_operational": 1, 00:16:06.465 "base_bdevs_list": [ 00:16:06.465 { 00:16:06.465 "name": null, 00:16:06.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.465 "is_configured": false, 00:16:06.465 "data_offset": 0, 00:16:06.465 "data_size": 7936 00:16:06.465 }, 00:16:06.465 { 00:16:06.465 "name": "BaseBdev2", 00:16:06.465 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:06.465 "is_configured": true, 00:16:06.465 "data_offset": 256, 00:16:06.465 "data_size": 7936 00:16:06.465 } 00:16:06.465 ] 00:16:06.465 }' 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.465 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.724 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.724 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.724 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.724 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.724 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.724 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.724 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.724 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.724 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.724 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.724 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.724 
"name": "raid_bdev1", 00:16:06.724 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:06.724 "strip_size_kb": 0, 00:16:06.724 "state": "online", 00:16:06.724 "raid_level": "raid1", 00:16:06.724 "superblock": true, 00:16:06.724 "num_base_bdevs": 2, 00:16:06.724 "num_base_bdevs_discovered": 1, 00:16:06.724 "num_base_bdevs_operational": 1, 00:16:06.724 "base_bdevs_list": [ 00:16:06.724 { 00:16:06.724 "name": null, 00:16:06.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.724 "is_configured": false, 00:16:06.724 "data_offset": 0, 00:16:06.724 "data_size": 7936 00:16:06.724 }, 00:16:06.724 { 00:16:06.724 "name": "BaseBdev2", 00:16:06.724 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:06.724 "is_configured": true, 00:16:06.724 "data_offset": 256, 00:16:06.724 "data_size": 7936 00:16:06.724 } 00:16:06.724 ] 00:16:06.724 }' 00:16:06.724 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.983 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.983 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.983 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.983 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:06.983 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.983 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.983 [2024-12-07 16:42:05.692528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.983 [2024-12-07 16:42:05.700081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:16:06.983 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:06.983 16:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:06.983 [2024-12-07 16:42:05.702395] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:07.920 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.920 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.920 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.920 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.920 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.920 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.920 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.920 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.920 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.920 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.920 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.920 "name": "raid_bdev1", 00:16:07.920 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:07.920 "strip_size_kb": 0, 00:16:07.920 "state": "online", 00:16:07.920 "raid_level": "raid1", 00:16:07.920 "superblock": true, 00:16:07.920 "num_base_bdevs": 2, 00:16:07.920 "num_base_bdevs_discovered": 2, 00:16:07.920 "num_base_bdevs_operational": 2, 00:16:07.920 "process": { 00:16:07.920 "type": "rebuild", 00:16:07.920 "target": "spare", 00:16:07.920 "progress": { 00:16:07.920 "blocks": 2560, 00:16:07.920 
"percent": 32 00:16:07.920 } 00:16:07.920 }, 00:16:07.920 "base_bdevs_list": [ 00:16:07.920 { 00:16:07.920 "name": "spare", 00:16:07.920 "uuid": "4ec9b151-bfef-59b0-9dce-07a167190a13", 00:16:07.920 "is_configured": true, 00:16:07.920 "data_offset": 256, 00:16:07.920 "data_size": 7936 00:16:07.920 }, 00:16:07.920 { 00:16:07.920 "name": "BaseBdev2", 00:16:07.920 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:07.920 "is_configured": true, 00:16:07.920 "data_offset": 256, 00:16:07.920 "data_size": 7936 00:16:07.920 } 00:16:07.920 ] 00:16:07.920 }' 00:16:07.920 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.920 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.920 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:08.180 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=578 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.180 "name": "raid_bdev1", 00:16:08.180 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:08.180 "strip_size_kb": 0, 00:16:08.180 "state": "online", 00:16:08.180 "raid_level": "raid1", 00:16:08.180 "superblock": true, 00:16:08.180 "num_base_bdevs": 2, 00:16:08.180 "num_base_bdevs_discovered": 2, 00:16:08.180 "num_base_bdevs_operational": 2, 00:16:08.180 "process": { 00:16:08.180 "type": "rebuild", 00:16:08.180 "target": "spare", 00:16:08.180 "progress": { 00:16:08.180 "blocks": 2816, 00:16:08.180 "percent": 35 00:16:08.180 } 00:16:08.180 }, 00:16:08.180 "base_bdevs_list": [ 00:16:08.180 { 00:16:08.180 "name": "spare", 00:16:08.180 "uuid": "4ec9b151-bfef-59b0-9dce-07a167190a13", 00:16:08.180 "is_configured": true, 00:16:08.180 "data_offset": 256, 00:16:08.180 "data_size": 7936 00:16:08.180 }, 00:16:08.180 { 00:16:08.180 "name": "BaseBdev2", 
00:16:08.180 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:08.180 "is_configured": true, 00:16:08.180 "data_offset": 256, 00:16:08.180 "data_size": 7936 00:16:08.180 } 00:16:08.180 ] 00:16:08.180 }' 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.180 16:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.180 16:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.180 16:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.562 "name": "raid_bdev1", 00:16:09.562 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:09.562 "strip_size_kb": 0, 00:16:09.562 "state": "online", 00:16:09.562 "raid_level": "raid1", 00:16:09.562 "superblock": true, 00:16:09.562 "num_base_bdevs": 2, 00:16:09.562 "num_base_bdevs_discovered": 2, 00:16:09.562 "num_base_bdevs_operational": 2, 00:16:09.562 "process": { 00:16:09.562 "type": "rebuild", 00:16:09.562 "target": "spare", 00:16:09.562 "progress": { 00:16:09.562 "blocks": 5888, 00:16:09.562 "percent": 74 00:16:09.562 } 00:16:09.562 }, 00:16:09.562 "base_bdevs_list": [ 00:16:09.562 { 00:16:09.562 "name": "spare", 00:16:09.562 "uuid": "4ec9b151-bfef-59b0-9dce-07a167190a13", 00:16:09.562 "is_configured": true, 00:16:09.562 "data_offset": 256, 00:16:09.562 "data_size": 7936 00:16:09.562 }, 00:16:09.562 { 00:16:09.562 "name": "BaseBdev2", 00:16:09.562 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:09.562 "is_configured": true, 00:16:09.562 "data_offset": 256, 00:16:09.562 "data_size": 7936 00:16:09.562 } 00:16:09.562 ] 00:16:09.562 }' 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.562 16:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.132 [2024-12-07 16:42:08.827050] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:10.132 [2024-12-07 16:42:08.827186] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:10.132 [2024-12-07 16:42:08.827356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.393 "name": "raid_bdev1", 00:16:10.393 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:10.393 "strip_size_kb": 0, 00:16:10.393 "state": "online", 00:16:10.393 "raid_level": "raid1", 00:16:10.393 "superblock": true, 00:16:10.393 "num_base_bdevs": 2, 00:16:10.393 "num_base_bdevs_discovered": 2, 00:16:10.393 "num_base_bdevs_operational": 2, 00:16:10.393 "base_bdevs_list": [ 00:16:10.393 { 00:16:10.393 "name": 
"spare", 00:16:10.393 "uuid": "4ec9b151-bfef-59b0-9dce-07a167190a13", 00:16:10.393 "is_configured": true, 00:16:10.393 "data_offset": 256, 00:16:10.393 "data_size": 7936 00:16:10.393 }, 00:16:10.393 { 00:16:10.393 "name": "BaseBdev2", 00:16:10.393 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:10.393 "is_configured": true, 00:16:10.393 "data_offset": 256, 00:16:10.393 "data_size": 7936 00:16:10.393 } 00:16:10.393 ] 00:16:10.393 }' 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:10.393 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.653 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.654 "name": "raid_bdev1", 00:16:10.654 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:10.654 "strip_size_kb": 0, 00:16:10.654 "state": "online", 00:16:10.654 "raid_level": "raid1", 00:16:10.654 "superblock": true, 00:16:10.654 "num_base_bdevs": 2, 00:16:10.654 "num_base_bdevs_discovered": 2, 00:16:10.654 "num_base_bdevs_operational": 2, 00:16:10.654 "base_bdevs_list": [ 00:16:10.654 { 00:16:10.654 "name": "spare", 00:16:10.654 "uuid": "4ec9b151-bfef-59b0-9dce-07a167190a13", 00:16:10.654 "is_configured": true, 00:16:10.654 "data_offset": 256, 00:16:10.654 "data_size": 7936 00:16:10.654 }, 00:16:10.654 { 00:16:10.654 "name": "BaseBdev2", 00:16:10.654 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:10.654 "is_configured": true, 00:16:10.654 "data_offset": 256, 00:16:10.654 "data_size": 7936 00:16:10.654 } 00:16:10.654 ] 00:16:10.654 }' 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.654 "name": "raid_bdev1", 00:16:10.654 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:10.654 "strip_size_kb": 0, 00:16:10.654 "state": "online", 00:16:10.654 "raid_level": "raid1", 00:16:10.654 "superblock": true, 00:16:10.654 "num_base_bdevs": 2, 00:16:10.654 "num_base_bdevs_discovered": 2, 00:16:10.654 "num_base_bdevs_operational": 2, 00:16:10.654 "base_bdevs_list": [ 00:16:10.654 { 00:16:10.654 "name": "spare", 00:16:10.654 "uuid": "4ec9b151-bfef-59b0-9dce-07a167190a13", 00:16:10.654 "is_configured": true, 00:16:10.654 "data_offset": 256, 00:16:10.654 "data_size": 7936 00:16:10.654 }, 00:16:10.654 { 
00:16:10.654 "name": "BaseBdev2", 00:16:10.654 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:10.654 "is_configured": true, 00:16:10.654 "data_offset": 256, 00:16:10.654 "data_size": 7936 00:16:10.654 } 00:16:10.654 ] 00:16:10.654 }' 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.654 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.225 [2024-12-07 16:42:09.889488] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:11.225 [2024-12-07 16:42:09.889532] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.225 [2024-12-07 16:42:09.889657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.225 [2024-12-07 16:42:09.889735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.225 [2024-12-07 16:42:09.889749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:11.225 16:42:09 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:11.225 16:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:11.486 /dev/nbd0 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@869 -- # local i 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.486 1+0 records in 00:16:11.486 1+0 records out 00:16:11.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418871 s, 9.8 MB/s 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:11.486 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk spare /dev/nbd1 00:16:11.747 /dev/nbd1 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.747 1+0 records in 00:16:11.747 1+0 records out 00:16:11.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504791 s, 8.1 MB/s 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:11.747 16:42:10 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:11.747 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:12.024 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:12.024 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:12.024 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:12.024 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.024 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.024 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:12.024 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 
00:16:12.024 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.024 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.024 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:12.285 16:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.285 [2024-12-07 16:42:11.028773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:12.285 [2024-12-07 16:42:11.028869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.285 [2024-12-07 16:42:11.028898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:12.285 [2024-12-07 16:42:11.028913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.285 [2024-12-07 16:42:11.031500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.285 [2024-12-07 16:42:11.031542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:12.285 [2024-12-07 16:42:11.031650] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:12.285 [2024-12-07 16:42:11.031719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.285 [2024-12-07 16:42:11.031841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.285 spare 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.285 [2024-12-07 16:42:11.131781] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:12.285 [2024-12-07 16:42:11.131850] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:12.285 [2024-12-07 16:42:11.132298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0001c19b0 00:16:12.285 [2024-12-07 16:42:11.132537] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:12.285 [2024-12-07 16:42:11.132563] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:12.285 [2024-12-07 16:42:11.132780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.285 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.546 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.546 "name": "raid_bdev1", 00:16:12.546 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:12.546 "strip_size_kb": 0, 00:16:12.546 "state": "online", 00:16:12.546 "raid_level": "raid1", 00:16:12.546 "superblock": true, 00:16:12.546 "num_base_bdevs": 2, 00:16:12.546 "num_base_bdevs_discovered": 2, 00:16:12.546 "num_base_bdevs_operational": 2, 00:16:12.546 "base_bdevs_list": [ 00:16:12.546 { 00:16:12.546 "name": "spare", 00:16:12.546 "uuid": "4ec9b151-bfef-59b0-9dce-07a167190a13", 00:16:12.546 "is_configured": true, 00:16:12.546 "data_offset": 256, 00:16:12.546 "data_size": 7936 00:16:12.546 }, 00:16:12.546 { 00:16:12.546 "name": "BaseBdev2", 00:16:12.546 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:12.546 "is_configured": true, 00:16:12.546 "data_offset": 256, 00:16:12.546 "data_size": 7936 00:16:12.546 } 00:16:12.546 ] 00:16:12.546 }' 00:16:12.546 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.546 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.806 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.806 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.806 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.806 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.806 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.806 
16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.806 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.806 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.806 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.806 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.806 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.806 "name": "raid_bdev1", 00:16:12.806 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:12.806 "strip_size_kb": 0, 00:16:12.806 "state": "online", 00:16:12.806 "raid_level": "raid1", 00:16:12.806 "superblock": true, 00:16:12.806 "num_base_bdevs": 2, 00:16:12.806 "num_base_bdevs_discovered": 2, 00:16:12.806 "num_base_bdevs_operational": 2, 00:16:12.806 "base_bdevs_list": [ 00:16:12.806 { 00:16:12.806 "name": "spare", 00:16:12.806 "uuid": "4ec9b151-bfef-59b0-9dce-07a167190a13", 00:16:12.806 "is_configured": true, 00:16:12.806 "data_offset": 256, 00:16:12.806 "data_size": 7936 00:16:12.806 }, 00:16:12.806 { 00:16:12.806 "name": "BaseBdev2", 00:16:12.806 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:12.806 "is_configured": true, 00:16:12.806 "data_offset": 256, 00:16:12.806 "data_size": 7936 00:16:12.806 } 00:16:12.806 ] 00:16:12.806 }' 00:16:12.806 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.806 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.806 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.067 16:42:11 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.067 [2024-12-07 16:42:11.767737] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.067 "name": "raid_bdev1", 00:16:13.067 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:13.067 "strip_size_kb": 0, 00:16:13.067 "state": "online", 00:16:13.067 "raid_level": "raid1", 00:16:13.067 "superblock": true, 00:16:13.067 "num_base_bdevs": 2, 00:16:13.067 "num_base_bdevs_discovered": 1, 00:16:13.067 "num_base_bdevs_operational": 1, 00:16:13.067 "base_bdevs_list": [ 00:16:13.067 { 00:16:13.067 "name": null, 00:16:13.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.067 "is_configured": false, 00:16:13.067 "data_offset": 0, 00:16:13.067 "data_size": 7936 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "name": "BaseBdev2", 00:16:13.067 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:13.067 "is_configured": true, 00:16:13.067 "data_offset": 256, 00:16:13.067 "data_size": 7936 00:16:13.067 } 00:16:13.067 ] 00:16:13.067 }' 00:16:13.067 16:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.067 16:42:11 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.327 16:42:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:13.327 16:42:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.327 16:42:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.587 [2024-12-07 16:42:12.227052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.587 [2024-12-07 16:42:12.227302] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:13.587 [2024-12-07 16:42:12.227321] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:13.587 [2024-12-07 16:42:12.227383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.587 [2024-12-07 16:42:12.234657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:16:13.587 16:42:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.587 16:42:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:13.587 [2024-12-07 16:42:12.237001] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.528 "name": "raid_bdev1", 00:16:14.528 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:14.528 "strip_size_kb": 0, 00:16:14.528 "state": "online", 00:16:14.528 "raid_level": "raid1", 00:16:14.528 "superblock": true, 00:16:14.528 "num_base_bdevs": 2, 00:16:14.528 "num_base_bdevs_discovered": 2, 00:16:14.528 "num_base_bdevs_operational": 2, 00:16:14.528 "process": { 00:16:14.528 "type": "rebuild", 00:16:14.528 "target": "spare", 00:16:14.528 "progress": { 00:16:14.528 "blocks": 2560, 00:16:14.528 "percent": 32 00:16:14.528 } 00:16:14.528 }, 00:16:14.528 "base_bdevs_list": [ 00:16:14.528 { 00:16:14.528 "name": "spare", 00:16:14.528 "uuid": "4ec9b151-bfef-59b0-9dce-07a167190a13", 00:16:14.528 "is_configured": true, 00:16:14.528 "data_offset": 256, 00:16:14.528 "data_size": 7936 00:16:14.528 }, 00:16:14.528 { 00:16:14.528 "name": "BaseBdev2", 00:16:14.528 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:14.528 "is_configured": true, 00:16:14.528 "data_offset": 256, 00:16:14.528 "data_size": 7936 00:16:14.528 } 00:16:14.528 ] 00:16:14.528 }' 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.528 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.528 [2024-12-07 16:42:13.385590] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.789 [2024-12-07 16:42:13.446199] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:14.789 [2024-12-07 16:42:13.446297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.789 [2024-12-07 16:42:13.446317] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.789 [2024-12-07 16:42:13.446326] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.789 "name": "raid_bdev1", 00:16:14.789 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:14.789 "strip_size_kb": 0, 00:16:14.789 "state": "online", 00:16:14.789 "raid_level": "raid1", 00:16:14.789 "superblock": true, 00:16:14.789 "num_base_bdevs": 2, 00:16:14.789 "num_base_bdevs_discovered": 1, 00:16:14.789 "num_base_bdevs_operational": 1, 00:16:14.789 "base_bdevs_list": [ 00:16:14.789 { 00:16:14.789 "name": null, 00:16:14.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.789 "is_configured": false, 00:16:14.789 "data_offset": 0, 00:16:14.789 "data_size": 7936 00:16:14.789 }, 00:16:14.789 { 00:16:14.789 "name": "BaseBdev2", 00:16:14.789 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:14.789 "is_configured": true, 00:16:14.789 "data_offset": 256, 00:16:14.789 "data_size": 7936 00:16:14.789 } 00:16:14.789 ] 00:16:14.789 }' 
00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.789 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.049 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:15.049 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.049 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.049 [2024-12-07 16:42:13.933385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:15.049 [2024-12-07 16:42:13.933472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.049 [2024-12-07 16:42:13.933502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:15.049 [2024-12-07 16:42:13.933513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.049 [2024-12-07 16:42:13.934072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.049 [2024-12-07 16:42:13.934096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:15.049 [2024-12-07 16:42:13.934205] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:15.049 [2024-12-07 16:42:13.934224] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:15.049 [2024-12-07 16:42:13.934244] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:15.049 [2024-12-07 16:42:13.934267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:15.050 [2024-12-07 16:42:13.941624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:15.050 spare 00:16:15.050 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.050 16:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:15.050 [2024-12-07 16:42:13.943999] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:16.431 16:42:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.431 16:42:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.432 16:42:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.432 16:42:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.432 16:42:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.432 16:42:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.432 16:42:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.432 16:42:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.432 16:42:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.432 16:42:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.432 16:42:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.432 "name": "raid_bdev1", 00:16:16.432 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:16.432 "strip_size_kb": 0, 00:16:16.432 
"state": "online", 00:16:16.432 "raid_level": "raid1", 00:16:16.432 "superblock": true, 00:16:16.432 "num_base_bdevs": 2, 00:16:16.432 "num_base_bdevs_discovered": 2, 00:16:16.432 "num_base_bdevs_operational": 2, 00:16:16.432 "process": { 00:16:16.432 "type": "rebuild", 00:16:16.432 "target": "spare", 00:16:16.432 "progress": { 00:16:16.432 "blocks": 2560, 00:16:16.432 "percent": 32 00:16:16.432 } 00:16:16.432 }, 00:16:16.432 "base_bdevs_list": [ 00:16:16.432 { 00:16:16.432 "name": "spare", 00:16:16.432 "uuid": "4ec9b151-bfef-59b0-9dce-07a167190a13", 00:16:16.432 "is_configured": true, 00:16:16.432 "data_offset": 256, 00:16:16.432 "data_size": 7936 00:16:16.432 }, 00:16:16.432 { 00:16:16.432 "name": "BaseBdev2", 00:16:16.432 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:16.432 "is_configured": true, 00:16:16.432 "data_offset": 256, 00:16:16.432 "data_size": 7936 00:16:16.432 } 00:16:16.432 ] 00:16:16.432 }' 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.432 [2024-12-07 16:42:15.108704] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:16.432 [2024-12-07 16:42:15.153406] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:16:16.432 [2024-12-07 16:42:15.153505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.432 [2024-12-07 16:42:15.153522] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:16.432 [2024-12-07 16:42:15.153533] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.432 16:42:15 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.432 "name": "raid_bdev1", 00:16:16.432 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:16.432 "strip_size_kb": 0, 00:16:16.432 "state": "online", 00:16:16.432 "raid_level": "raid1", 00:16:16.432 "superblock": true, 00:16:16.432 "num_base_bdevs": 2, 00:16:16.432 "num_base_bdevs_discovered": 1, 00:16:16.432 "num_base_bdevs_operational": 1, 00:16:16.432 "base_bdevs_list": [ 00:16:16.432 { 00:16:16.432 "name": null, 00:16:16.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.432 "is_configured": false, 00:16:16.432 "data_offset": 0, 00:16:16.432 "data_size": 7936 00:16:16.432 }, 00:16:16.432 { 00:16:16.432 "name": "BaseBdev2", 00:16:16.432 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:16.432 "is_configured": true, 00:16:16.432 "data_offset": 256, 00:16:16.432 "data_size": 7936 00:16:16.432 } 00:16:16.432 ] 00:16:16.432 }' 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.432 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.000 "name": "raid_bdev1", 00:16:17.000 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:17.000 "strip_size_kb": 0, 00:16:17.000 "state": "online", 00:16:17.000 "raid_level": "raid1", 00:16:17.000 "superblock": true, 00:16:17.000 "num_base_bdevs": 2, 00:16:17.000 "num_base_bdevs_discovered": 1, 00:16:17.000 "num_base_bdevs_operational": 1, 00:16:17.000 "base_bdevs_list": [ 00:16:17.000 { 00:16:17.000 "name": null, 00:16:17.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.000 "is_configured": false, 00:16:17.000 "data_offset": 0, 00:16:17.000 "data_size": 7936 00:16:17.000 }, 00:16:17.000 { 00:16:17.000 "name": "BaseBdev2", 00:16:17.000 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:17.000 "is_configured": true, 00:16:17.000 "data_offset": 256, 00:16:17.000 "data_size": 7936 00:16:17.000 } 00:16:17.000 ] 00:16:17.000 }' 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.000 [2024-12-07 16:42:15.784368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:17.000 [2024-12-07 16:42:15.784461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.000 [2024-12-07 16:42:15.784489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:17.000 [2024-12-07 16:42:15.784503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.000 [2024-12-07 16:42:15.785020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.000 [2024-12-07 16:42:15.785050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:17.000 [2024-12-07 16:42:15.785151] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:17.000 [2024-12-07 16:42:15.785179] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:17.000 [2024-12-07 16:42:15.785190] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:17.000 [2024-12-07 16:42:15.785209] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:17.000 BaseBdev1 00:16:17.000 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.001 16:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.938 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.198 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.198 "name": "raid_bdev1", 00:16:18.198 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:18.198 "strip_size_kb": 0, 00:16:18.198 "state": "online", 00:16:18.198 "raid_level": "raid1", 00:16:18.198 "superblock": true, 00:16:18.198 "num_base_bdevs": 2, 00:16:18.198 "num_base_bdevs_discovered": 1, 00:16:18.198 "num_base_bdevs_operational": 1, 00:16:18.198 "base_bdevs_list": [ 00:16:18.198 { 00:16:18.198 "name": null, 00:16:18.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.198 "is_configured": false, 00:16:18.198 "data_offset": 0, 00:16:18.198 "data_size": 7936 00:16:18.198 }, 00:16:18.198 { 00:16:18.198 "name": "BaseBdev2", 00:16:18.198 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:18.198 "is_configured": true, 00:16:18.198 "data_offset": 256, 00:16:18.198 "data_size": 7936 00:16:18.198 } 00:16:18.198 ] 00:16:18.198 }' 00:16:18.198 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.198 16:42:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:18.457 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.457 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.457 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.457 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.457 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.457 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.457 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:16:18.457 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.457 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:18.457 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.457 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.457 "name": "raid_bdev1", 00:16:18.457 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:18.457 "strip_size_kb": 0, 00:16:18.457 "state": "online", 00:16:18.457 "raid_level": "raid1", 00:16:18.457 "superblock": true, 00:16:18.457 "num_base_bdevs": 2, 00:16:18.457 "num_base_bdevs_discovered": 1, 00:16:18.457 "num_base_bdevs_operational": 1, 00:16:18.457 "base_bdevs_list": [ 00:16:18.457 { 00:16:18.457 "name": null, 00:16:18.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.457 "is_configured": false, 00:16:18.457 "data_offset": 0, 00:16:18.457 "data_size": 7936 00:16:18.457 }, 00:16:18.457 { 00:16:18.457 "name": "BaseBdev2", 00:16:18.458 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:18.458 "is_configured": true, 00:16:18.458 "data_offset": 256, 00:16:18.458 "data_size": 7936 00:16:18.458 } 00:16:18.458 ] 00:16:18.458 }' 00:16:18.458 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.458 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.458 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.458 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.458 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:18.458 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:16:18.458 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:18.458 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:18.458 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.458 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:18.458 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.458 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:18.458 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.458 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:18.717 [2024-12-07 16:42:17.357656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.717 [2024-12-07 16:42:17.357877] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:18.717 [2024-12-07 16:42:17.357896] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:18.717 request: 00:16:18.717 { 00:16:18.717 "base_bdev": "BaseBdev1", 00:16:18.717 "raid_bdev": "raid_bdev1", 00:16:18.717 "method": "bdev_raid_add_base_bdev", 00:16:18.717 "req_id": 1 00:16:18.717 } 00:16:18.717 Got JSON-RPC error response 00:16:18.717 response: 00:16:18.717 { 00:16:18.717 "code": -22, 00:16:18.717 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:18.717 } 00:16:18.717 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:16:18.717 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:16:18.717 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:18.717 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:18.717 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:18.717 16:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.657 "name": "raid_bdev1", 00:16:19.657 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:19.657 "strip_size_kb": 0, 00:16:19.657 "state": "online", 00:16:19.657 "raid_level": "raid1", 00:16:19.657 "superblock": true, 00:16:19.657 "num_base_bdevs": 2, 00:16:19.657 "num_base_bdevs_discovered": 1, 00:16:19.657 "num_base_bdevs_operational": 1, 00:16:19.657 "base_bdevs_list": [ 00:16:19.657 { 00:16:19.657 "name": null, 00:16:19.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.657 "is_configured": false, 00:16:19.657 "data_offset": 0, 00:16:19.657 "data_size": 7936 00:16:19.657 }, 00:16:19.657 { 00:16:19.657 "name": "BaseBdev2", 00:16:19.657 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:19.657 "is_configured": true, 00:16:19.657 "data_offset": 256, 00:16:19.657 "data_size": 7936 00:16:19.657 } 00:16:19.657 ] 00:16:19.657 }' 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.657 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.226 16:42:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.226 "name": "raid_bdev1", 00:16:20.226 "uuid": "69055f51-16dd-477f-ba3b-a8e6fb1ffe52", 00:16:20.226 "strip_size_kb": 0, 00:16:20.226 "state": "online", 00:16:20.226 "raid_level": "raid1", 00:16:20.226 "superblock": true, 00:16:20.226 "num_base_bdevs": 2, 00:16:20.226 "num_base_bdevs_discovered": 1, 00:16:20.226 "num_base_bdevs_operational": 1, 00:16:20.226 "base_bdevs_list": [ 00:16:20.226 { 00:16:20.226 "name": null, 00:16:20.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.226 "is_configured": false, 00:16:20.226 "data_offset": 0, 00:16:20.226 "data_size": 7936 00:16:20.226 }, 00:16:20.226 { 00:16:20.226 "name": "BaseBdev2", 00:16:20.226 "uuid": "464bf01b-4034-5557-bcf5-88ee5c30870f", 00:16:20.226 "is_configured": true, 00:16:20.226 "data_offset": 256, 00:16:20.226 "data_size": 7936 00:16:20.226 } 00:16:20.226 ] 00:16:20.226 }' 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.226 16:42:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 97216 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 97216 ']' 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 97216 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:20.226 16:42:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97216 00:16:20.226 16:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:20.226 16:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:20.226 killing process with pid 97216 00:16:20.226 16:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97216' 00:16:20.226 16:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 97216 00:16:20.226 Received shutdown signal, test time was about 60.000000 seconds 00:16:20.226 00:16:20.226 Latency(us) 00:16:20.226 [2024-12-07T16:42:19.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.226 [2024-12-07T16:42:19.125Z] =================================================================================================================== 00:16:20.226 [2024-12-07T16:42:19.125Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:20.226 [2024-12-07 16:42:19.022804] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:20.226 16:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 97216 00:16:20.226 [2024-12-07 16:42:19.022983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.226 [2024-12-07 
16:42:19.023050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.226 [2024-12-07 16:42:19.023061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:20.226 [2024-12-07 16:42:19.082672] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:20.827 16:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:16:20.827 00:16:20.827 real 0m18.786s 00:16:20.827 user 0m24.782s 00:16:20.827 sys 0m2.799s 00:16:20.827 16:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:20.827 16:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.827 ************************************ 00:16:20.827 END TEST raid_rebuild_test_sb_4k 00:16:20.827 ************************************ 00:16:20.827 16:42:19 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:16:20.827 16:42:19 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:16:20.827 16:42:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:20.827 16:42:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.827 16:42:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.827 ************************************ 00:16:20.827 START TEST raid_state_function_test_sb_md_separate 00:16:20.827 ************************************ 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:20.827 
16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:20.827 16:42:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97890 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97890' 00:16:20.827 Process raid pid: 97890 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97890 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97890 ']' 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.827 16:42:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.827 [2024-12-07 16:42:19.614049] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:20.827 [2024-12-07 16:42:19.614199] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.087 [2024-12-07 16:42:19.777841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.087 [2024-12-07 16:42:19.860220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.087 [2024-12-07 16:42:19.938220] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.087 [2024-12-07 16:42:19.938266] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.662 [2024-12-07 16:42:20.470847] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:21.662 [2024-12-07 16:42:20.470918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:16:21.662 [2024-12-07 16:42:20.470932] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:21.662 [2024-12-07 16:42:20.470943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.662 "name": "Existed_Raid", 00:16:21.662 "uuid": "bf5a23c1-318f-442c-8ba0-72a732dc14ad", 00:16:21.662 "strip_size_kb": 0, 00:16:21.662 "state": "configuring", 00:16:21.662 "raid_level": "raid1", 00:16:21.662 "superblock": true, 00:16:21.662 "num_base_bdevs": 2, 00:16:21.662 "num_base_bdevs_discovered": 0, 00:16:21.662 "num_base_bdevs_operational": 2, 00:16:21.662 "base_bdevs_list": [ 00:16:21.662 { 00:16:21.662 "name": "BaseBdev1", 00:16:21.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.662 "is_configured": false, 00:16:21.662 "data_offset": 0, 00:16:21.662 "data_size": 0 00:16:21.662 }, 00:16:21.662 { 00:16:21.662 "name": "BaseBdev2", 00:16:21.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.662 "is_configured": false, 00:16:21.662 "data_offset": 0, 00:16:21.662 "data_size": 0 00:16:21.662 } 00:16:21.662 ] 00:16:21.662 }' 00:16:21.662 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.663 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.233 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:22.233 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.233 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.233 
[2024-12-07 16:42:20.898009] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:22.234 [2024-12-07 16:42:20.898072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.234 [2024-12-07 16:42:20.910035] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.234 [2024-12-07 16:42:20.910089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:22.234 [2024-12-07 16:42:20.910098] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.234 [2024-12-07 16:42:20.910108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.234 [2024-12-07 16:42:20.938950] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.234 
BaseBdev1 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.234 [ 00:16:22.234 { 00:16:22.234 "name": "BaseBdev1", 00:16:22.234 "aliases": [ 00:16:22.234 "702e591e-1bc5-4c40-8dda-afc2f21c6a01" 00:16:22.234 ], 00:16:22.234 "product_name": "Malloc disk", 
00:16:22.234 "block_size": 4096, 00:16:22.234 "num_blocks": 8192, 00:16:22.234 "uuid": "702e591e-1bc5-4c40-8dda-afc2f21c6a01", 00:16:22.234 "md_size": 32, 00:16:22.234 "md_interleave": false, 00:16:22.234 "dif_type": 0, 00:16:22.234 "assigned_rate_limits": { 00:16:22.234 "rw_ios_per_sec": 0, 00:16:22.234 "rw_mbytes_per_sec": 0, 00:16:22.234 "r_mbytes_per_sec": 0, 00:16:22.234 "w_mbytes_per_sec": 0 00:16:22.234 }, 00:16:22.234 "claimed": true, 00:16:22.234 "claim_type": "exclusive_write", 00:16:22.234 "zoned": false, 00:16:22.234 "supported_io_types": { 00:16:22.234 "read": true, 00:16:22.234 "write": true, 00:16:22.234 "unmap": true, 00:16:22.234 "flush": true, 00:16:22.234 "reset": true, 00:16:22.234 "nvme_admin": false, 00:16:22.234 "nvme_io": false, 00:16:22.234 "nvme_io_md": false, 00:16:22.234 "write_zeroes": true, 00:16:22.234 "zcopy": true, 00:16:22.234 "get_zone_info": false, 00:16:22.234 "zone_management": false, 00:16:22.234 "zone_append": false, 00:16:22.234 "compare": false, 00:16:22.234 "compare_and_write": false, 00:16:22.234 "abort": true, 00:16:22.234 "seek_hole": false, 00:16:22.234 "seek_data": false, 00:16:22.234 "copy": true, 00:16:22.234 "nvme_iov_md": false 00:16:22.234 }, 00:16:22.234 "memory_domains": [ 00:16:22.234 { 00:16:22.234 "dma_device_id": "system", 00:16:22.234 "dma_device_type": 1 00:16:22.234 }, 00:16:22.234 { 00:16:22.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.234 "dma_device_type": 2 00:16:22.234 } 00:16:22.234 ], 00:16:22.234 "driver_specific": {} 00:16:22.234 } 00:16:22.234 ] 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:22.234 16:42:20 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.234 16:42:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.234 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.234 "name": "Existed_Raid", 00:16:22.234 "uuid": "2132a939-eec3-4947-940f-a0d873468282", 
00:16:22.234 "strip_size_kb": 0, 00:16:22.234 "state": "configuring", 00:16:22.234 "raid_level": "raid1", 00:16:22.234 "superblock": true, 00:16:22.234 "num_base_bdevs": 2, 00:16:22.234 "num_base_bdevs_discovered": 1, 00:16:22.234 "num_base_bdevs_operational": 2, 00:16:22.234 "base_bdevs_list": [ 00:16:22.234 { 00:16:22.234 "name": "BaseBdev1", 00:16:22.234 "uuid": "702e591e-1bc5-4c40-8dda-afc2f21c6a01", 00:16:22.234 "is_configured": true, 00:16:22.234 "data_offset": 256, 00:16:22.234 "data_size": 7936 00:16:22.234 }, 00:16:22.234 { 00:16:22.234 "name": "BaseBdev2", 00:16:22.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.234 "is_configured": false, 00:16:22.234 "data_offset": 0, 00:16:22.234 "data_size": 0 00:16:22.234 } 00:16:22.234 ] 00:16:22.234 }' 00:16:22.234 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.234 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.804 [2024-12-07 16:42:21.418283] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:22.804 [2024-12-07 16:42:21.418376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:22.804 16:42:21 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.804 [2024-12-07 16:42:21.430322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.804 [2024-12-07 16:42:21.432591] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.804 [2024-12-07 16:42:21.432643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.804 "name": "Existed_Raid", 00:16:22.804 "uuid": "9c1d71af-8fe4-4fa6-aa7f-31d344d2eea4", 00:16:22.804 "strip_size_kb": 0, 00:16:22.804 "state": "configuring", 00:16:22.804 "raid_level": "raid1", 00:16:22.804 "superblock": true, 00:16:22.804 "num_base_bdevs": 2, 00:16:22.804 "num_base_bdevs_discovered": 1, 00:16:22.804 "num_base_bdevs_operational": 2, 00:16:22.804 "base_bdevs_list": [ 00:16:22.804 { 00:16:22.804 "name": "BaseBdev1", 00:16:22.804 "uuid": "702e591e-1bc5-4c40-8dda-afc2f21c6a01", 00:16:22.804 "is_configured": true, 00:16:22.804 "data_offset": 256, 00:16:22.804 "data_size": 7936 00:16:22.804 }, 00:16:22.804 { 00:16:22.804 "name": "BaseBdev2", 00:16:22.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.804 "is_configured": false, 00:16:22.804 "data_offset": 0, 00:16:22.804 "data_size": 0 00:16:22.804 } 00:16:22.804 ] 00:16:22.804 }' 00:16:22.804 16:42:21 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.804 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.065 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:16:23.065 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.066 [2024-12-07 16:42:21.929358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:23.066 [2024-12-07 16:42:21.929635] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:23.066 [2024-12-07 16:42:21.929671] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:23.066 [2024-12-07 16:42:21.929798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:23.066 [2024-12-07 16:42:21.929961] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:23.066 [2024-12-07 16:42:21.929994] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:23.066 [2024-12-07 16:42:21.930106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.066 BaseBdev2 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.066 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.066 [ 00:16:23.066 { 00:16:23.066 "name": "BaseBdev2", 00:16:23.066 "aliases": [ 00:16:23.066 "e3ad9946-ff6d-4e0e-88df-9f7889a54b88" 00:16:23.066 ], 00:16:23.066 "product_name": "Malloc disk", 00:16:23.066 "block_size": 4096, 00:16:23.066 "num_blocks": 8192, 00:16:23.066 "uuid": "e3ad9946-ff6d-4e0e-88df-9f7889a54b88", 00:16:23.066 "md_size": 32, 00:16:23.066 "md_interleave": false, 00:16:23.066 "dif_type": 0, 00:16:23.066 "assigned_rate_limits": { 00:16:23.066 "rw_ios_per_sec": 0, 00:16:23.066 "rw_mbytes_per_sec": 0, 00:16:23.066 "r_mbytes_per_sec": 0, 00:16:23.066 "w_mbytes_per_sec": 0 00:16:23.066 }, 00:16:23.066 "claimed": true, 00:16:23.066 "claim_type": 
"exclusive_write", 00:16:23.066 "zoned": false, 00:16:23.066 "supported_io_types": { 00:16:23.066 "read": true, 00:16:23.327 "write": true, 00:16:23.327 "unmap": true, 00:16:23.327 "flush": true, 00:16:23.327 "reset": true, 00:16:23.327 "nvme_admin": false, 00:16:23.327 "nvme_io": false, 00:16:23.327 "nvme_io_md": false, 00:16:23.327 "write_zeroes": true, 00:16:23.327 "zcopy": true, 00:16:23.327 "get_zone_info": false, 00:16:23.327 "zone_management": false, 00:16:23.327 "zone_append": false, 00:16:23.327 "compare": false, 00:16:23.327 "compare_and_write": false, 00:16:23.327 "abort": true, 00:16:23.327 "seek_hole": false, 00:16:23.327 "seek_data": false, 00:16:23.327 "copy": true, 00:16:23.327 "nvme_iov_md": false 00:16:23.327 }, 00:16:23.327 "memory_domains": [ 00:16:23.327 { 00:16:23.327 "dma_device_id": "system", 00:16:23.327 "dma_device_type": 1 00:16:23.327 }, 00:16:23.327 { 00:16:23.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.327 "dma_device_type": 2 00:16:23.327 } 00:16:23.327 ], 00:16:23.327 "driver_specific": {} 00:16:23.327 } 00:16:23.327 ] 00:16:23.327 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.327 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:16:23.327 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:23.327 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:23.327 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:23.327 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.327 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.327 
16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.327 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.327 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.328 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.328 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.328 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.328 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.328 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.328 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.328 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.328 16:42:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.328 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.328 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.328 "name": "Existed_Raid", 00:16:23.328 "uuid": "9c1d71af-8fe4-4fa6-aa7f-31d344d2eea4", 00:16:23.328 "strip_size_kb": 0, 00:16:23.328 "state": "online", 00:16:23.328 "raid_level": "raid1", 00:16:23.328 "superblock": true, 00:16:23.328 "num_base_bdevs": 2, 00:16:23.328 "num_base_bdevs_discovered": 2, 00:16:23.328 "num_base_bdevs_operational": 2, 00:16:23.328 
"base_bdevs_list": [ 00:16:23.328 { 00:16:23.328 "name": "BaseBdev1", 00:16:23.328 "uuid": "702e591e-1bc5-4c40-8dda-afc2f21c6a01", 00:16:23.328 "is_configured": true, 00:16:23.328 "data_offset": 256, 00:16:23.328 "data_size": 7936 00:16:23.328 }, 00:16:23.328 { 00:16:23.328 "name": "BaseBdev2", 00:16:23.328 "uuid": "e3ad9946-ff6d-4e0e-88df-9f7889a54b88", 00:16:23.328 "is_configured": true, 00:16:23.328 "data_offset": 256, 00:16:23.328 "data_size": 7936 00:16:23.328 } 00:16:23.328 ] 00:16:23.328 }' 00:16:23.328 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.328 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.588 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:23.588 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:23.588 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:23.588 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:23.588 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:23.588 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:23.588 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:23.588 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.588 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.588 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:16:23.588 [2024-12-07 16:42:22.392973] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.588 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.588 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:23.588 "name": "Existed_Raid", 00:16:23.588 "aliases": [ 00:16:23.588 "9c1d71af-8fe4-4fa6-aa7f-31d344d2eea4" 00:16:23.588 ], 00:16:23.588 "product_name": "Raid Volume", 00:16:23.588 "block_size": 4096, 00:16:23.588 "num_blocks": 7936, 00:16:23.588 "uuid": "9c1d71af-8fe4-4fa6-aa7f-31d344d2eea4", 00:16:23.588 "md_size": 32, 00:16:23.588 "md_interleave": false, 00:16:23.588 "dif_type": 0, 00:16:23.588 "assigned_rate_limits": { 00:16:23.588 "rw_ios_per_sec": 0, 00:16:23.588 "rw_mbytes_per_sec": 0, 00:16:23.588 "r_mbytes_per_sec": 0, 00:16:23.588 "w_mbytes_per_sec": 0 00:16:23.588 }, 00:16:23.588 "claimed": false, 00:16:23.588 "zoned": false, 00:16:23.588 "supported_io_types": { 00:16:23.588 "read": true, 00:16:23.588 "write": true, 00:16:23.588 "unmap": false, 00:16:23.588 "flush": false, 00:16:23.588 "reset": true, 00:16:23.588 "nvme_admin": false, 00:16:23.588 "nvme_io": false, 00:16:23.588 "nvme_io_md": false, 00:16:23.588 "write_zeroes": true, 00:16:23.588 "zcopy": false, 00:16:23.588 "get_zone_info": false, 00:16:23.588 "zone_management": false, 00:16:23.589 "zone_append": false, 00:16:23.589 "compare": false, 00:16:23.589 "compare_and_write": false, 00:16:23.589 "abort": false, 00:16:23.589 "seek_hole": false, 00:16:23.589 "seek_data": false, 00:16:23.589 "copy": false, 00:16:23.589 "nvme_iov_md": false 00:16:23.589 }, 00:16:23.589 "memory_domains": [ 00:16:23.589 { 00:16:23.589 "dma_device_id": "system", 00:16:23.589 "dma_device_type": 1 00:16:23.589 }, 00:16:23.589 { 00:16:23.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.589 "dma_device_type": 2 00:16:23.589 }, 00:16:23.589 { 
00:16:23.589 "dma_device_id": "system", 00:16:23.589 "dma_device_type": 1 00:16:23.589 }, 00:16:23.589 { 00:16:23.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.589 "dma_device_type": 2 00:16:23.589 } 00:16:23.589 ], 00:16:23.589 "driver_specific": { 00:16:23.589 "raid": { 00:16:23.589 "uuid": "9c1d71af-8fe4-4fa6-aa7f-31d344d2eea4", 00:16:23.589 "strip_size_kb": 0, 00:16:23.589 "state": "online", 00:16:23.589 "raid_level": "raid1", 00:16:23.589 "superblock": true, 00:16:23.589 "num_base_bdevs": 2, 00:16:23.589 "num_base_bdevs_discovered": 2, 00:16:23.589 "num_base_bdevs_operational": 2, 00:16:23.589 "base_bdevs_list": [ 00:16:23.589 { 00:16:23.589 "name": "BaseBdev1", 00:16:23.589 "uuid": "702e591e-1bc5-4c40-8dda-afc2f21c6a01", 00:16:23.589 "is_configured": true, 00:16:23.589 "data_offset": 256, 00:16:23.589 "data_size": 7936 00:16:23.589 }, 00:16:23.589 { 00:16:23.589 "name": "BaseBdev2", 00:16:23.589 "uuid": "e3ad9946-ff6d-4e0e-88df-9f7889a54b88", 00:16:23.589 "is_configured": true, 00:16:23.589 "data_offset": 256, 00:16:23.589 "data_size": 7936 00:16:23.589 } 00:16:23.589 ] 00:16:23.589 } 00:16:23.589 } 00:16:23.589 }' 00:16:23.589 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:23.589 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:23.589 BaseBdev2' 00:16:23.589 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.849 [2024-12-07 16:42:22.608406] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.849 "name": "Existed_Raid", 00:16:23.849 "uuid": "9c1d71af-8fe4-4fa6-aa7f-31d344d2eea4", 00:16:23.849 "strip_size_kb": 0, 00:16:23.849 "state": "online", 00:16:23.849 "raid_level": "raid1", 00:16:23.849 "superblock": true, 00:16:23.849 "num_base_bdevs": 2, 00:16:23.849 "num_base_bdevs_discovered": 1, 00:16:23.849 "num_base_bdevs_operational": 1, 00:16:23.849 "base_bdevs_list": [ 00:16:23.849 { 00:16:23.849 "name": null, 00:16:23.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.849 "is_configured": false, 00:16:23.849 "data_offset": 0, 00:16:23.849 "data_size": 7936 00:16:23.849 }, 00:16:23.849 { 00:16:23.849 "name": "BaseBdev2", 00:16:23.849 "uuid": 
"e3ad9946-ff6d-4e0e-88df-9f7889a54b88", 00:16:23.849 "is_configured": true, 00:16:23.849 "data_offset": 256, 00:16:23.849 "data_size": 7936 00:16:23.849 } 00:16:23.849 ] 00:16:23.849 }' 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.849 16:42:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.419 [2024-12-07 16:42:23.169801] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:24.419 [2024-12-07 16:42:23.169943] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.419 [2024-12-07 16:42:23.192469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.419 [2024-12-07 16:42:23.192538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.419 [2024-12-07 16:42:23.192555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:24.419 16:42:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97890 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97890 ']' 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97890 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97890 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:24.419 killing process with pid 97890 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97890' 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97890 00:16:24.419 [2024-12-07 16:42:23.284877] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.419 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97890 00:16:24.419 [2024-12-07 16:42:23.286553] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:24.989 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:16:24.989 00:16:24.989 real 0m4.145s 00:16:24.989 user 0m6.274s 00:16:24.989 sys 0m0.973s 00:16:24.989 16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.989 
16:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.989 ************************************ 00:16:24.989 END TEST raid_state_function_test_sb_md_separate 00:16:24.989 ************************************ 00:16:24.989 16:42:23 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:16:24.989 16:42:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:24.989 16:42:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.989 16:42:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:24.989 ************************************ 00:16:24.989 START TEST raid_superblock_test_md_separate 00:16:24.989 ************************************ 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=98131 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 98131 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98131 ']' 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:24.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:24.989 16:42:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.989 [2024-12-07 16:42:23.840534] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:24.989 [2024-12-07 16:42:23.840709] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98131 ] 00:16:25.249 [2024-12-07 16:42:23.989611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.249 [2024-12-07 16:42:24.071675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.508 [2024-12-07 16:42:24.150462] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.508 [2024-12-07 16:42:24.150510] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:26.078 16:42:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.078 malloc1 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:26.078 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.079 [2024-12-07 16:42:24.720694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:26.079 [2024-12-07 16:42:24.720792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.079 [2024-12-07 16:42:24.720820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:26.079 [2024-12-07 16:42:24.720842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.079 [2024-12-07 16:42:24.723152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.079 [2024-12-07 16:42:24.723198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:16:26.079 pt1 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.079 malloc2 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.079 16:42:24 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.079 [2024-12-07 16:42:24.768805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:26.079 [2024-12-07 16:42:24.768907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.079 [2024-12-07 16:42:24.768930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:26.079 [2024-12-07 16:42:24.768942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.079 [2024-12-07 16:42:24.771214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.079 [2024-12-07 16:42:24.771253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:26.079 pt2 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.079 [2024-12-07 16:42:24.780838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:26.079 [2024-12-07 16:42:24.783022] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:26.079 [2024-12-07 16:42:24.783200] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:26.079 [2024-12-07 16:42:24.783231] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:26.079 [2024-12-07 16:42:24.783371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:26.079 [2024-12-07 16:42:24.783503] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:26.079 [2024-12-07 16:42:24.783519] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:26.079 [2024-12-07 16:42:24.783653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.079 16:42:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.079 "name": "raid_bdev1", 00:16:26.079 "uuid": "3f879521-8d7b-494f-bcd3-69d22451ac88", 00:16:26.079 "strip_size_kb": 0, 00:16:26.079 "state": "online", 00:16:26.079 "raid_level": "raid1", 00:16:26.079 "superblock": true, 00:16:26.079 "num_base_bdevs": 2, 00:16:26.079 "num_base_bdevs_discovered": 2, 00:16:26.079 "num_base_bdevs_operational": 2, 00:16:26.079 "base_bdevs_list": [ 00:16:26.079 { 00:16:26.079 "name": "pt1", 00:16:26.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:26.079 "is_configured": true, 00:16:26.079 "data_offset": 256, 00:16:26.079 "data_size": 7936 00:16:26.079 }, 00:16:26.079 { 00:16:26.079 "name": "pt2", 00:16:26.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.079 "is_configured": true, 00:16:26.079 "data_offset": 256, 00:16:26.079 "data_size": 7936 00:16:26.079 } 00:16:26.079 ] 00:16:26.079 }' 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.079 16:42:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.339 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:26.339 16:42:25 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:26.339 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:26.339 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:26.339 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:26.339 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:26.339 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:26.339 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:26.339 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.339 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.599 [2024-12-07 16:42:25.240431] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.599 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.599 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:26.599 "name": "raid_bdev1", 00:16:26.599 "aliases": [ 00:16:26.599 "3f879521-8d7b-494f-bcd3-69d22451ac88" 00:16:26.599 ], 00:16:26.599 "product_name": "Raid Volume", 00:16:26.599 "block_size": 4096, 00:16:26.599 "num_blocks": 7936, 00:16:26.599 "uuid": "3f879521-8d7b-494f-bcd3-69d22451ac88", 00:16:26.599 "md_size": 32, 00:16:26.599 "md_interleave": false, 00:16:26.599 "dif_type": 0, 00:16:26.599 "assigned_rate_limits": { 00:16:26.599 "rw_ios_per_sec": 0, 00:16:26.599 "rw_mbytes_per_sec": 0, 00:16:26.599 "r_mbytes_per_sec": 0, 00:16:26.599 "w_mbytes_per_sec": 0 00:16:26.599 }, 00:16:26.599 "claimed": false, 00:16:26.599 "zoned": false, 
00:16:26.599 "supported_io_types": { 00:16:26.599 "read": true, 00:16:26.599 "write": true, 00:16:26.599 "unmap": false, 00:16:26.599 "flush": false, 00:16:26.599 "reset": true, 00:16:26.599 "nvme_admin": false, 00:16:26.599 "nvme_io": false, 00:16:26.599 "nvme_io_md": false, 00:16:26.599 "write_zeroes": true, 00:16:26.600 "zcopy": false, 00:16:26.600 "get_zone_info": false, 00:16:26.600 "zone_management": false, 00:16:26.600 "zone_append": false, 00:16:26.600 "compare": false, 00:16:26.600 "compare_and_write": false, 00:16:26.600 "abort": false, 00:16:26.600 "seek_hole": false, 00:16:26.600 "seek_data": false, 00:16:26.600 "copy": false, 00:16:26.600 "nvme_iov_md": false 00:16:26.600 }, 00:16:26.600 "memory_domains": [ 00:16:26.600 { 00:16:26.600 "dma_device_id": "system", 00:16:26.600 "dma_device_type": 1 00:16:26.600 }, 00:16:26.600 { 00:16:26.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.600 "dma_device_type": 2 00:16:26.600 }, 00:16:26.600 { 00:16:26.600 "dma_device_id": "system", 00:16:26.600 "dma_device_type": 1 00:16:26.600 }, 00:16:26.600 { 00:16:26.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.600 "dma_device_type": 2 00:16:26.600 } 00:16:26.600 ], 00:16:26.600 "driver_specific": { 00:16:26.600 "raid": { 00:16:26.600 "uuid": "3f879521-8d7b-494f-bcd3-69d22451ac88", 00:16:26.600 "strip_size_kb": 0, 00:16:26.600 "state": "online", 00:16:26.600 "raid_level": "raid1", 00:16:26.600 "superblock": true, 00:16:26.600 "num_base_bdevs": 2, 00:16:26.600 "num_base_bdevs_discovered": 2, 00:16:26.600 "num_base_bdevs_operational": 2, 00:16:26.600 "base_bdevs_list": [ 00:16:26.600 { 00:16:26.600 "name": "pt1", 00:16:26.600 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:26.600 "is_configured": true, 00:16:26.600 "data_offset": 256, 00:16:26.600 "data_size": 7936 00:16:26.600 }, 00:16:26.600 { 00:16:26.600 "name": "pt2", 00:16:26.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.600 "is_configured": true, 00:16:26.600 "data_offset": 256, 
00:16:26.600 "data_size": 7936 00:16:26.600 } 00:16:26.600 ] 00:16:26.600 } 00:16:26.600 } 00:16:26.600 }' 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:26.600 pt2' 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.600 [2024-12-07 16:42:25.439954] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3f879521-8d7b-494f-bcd3-69d22451ac88 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 3f879521-8d7b-494f-bcd3-69d22451ac88 ']' 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:26.600 16:42:25 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.600 [2024-12-07 16:42:25.471641] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.600 [2024-12-07 16:42:25.471685] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.600 [2024-12-07 16:42:25.471818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.600 [2024-12-07 16:42:25.471896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.600 [2024-12-07 16:42:25.471909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.600 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@650 -- # local es=0 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.859 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.859 [2024-12-07 16:42:25.599586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:26.860 [2024-12-07 16:42:25.601784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:26.860 [2024-12-07 16:42:25.601868] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:26.860 [2024-12-07 16:42:25.601937] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:26.860 [2024-12-07 16:42:25.601960] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.860 [2024-12-07 16:42:25.601970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:16:26.860 request: 00:16:26.860 { 00:16:26.860 "name": "raid_bdev1", 00:16:26.860 "raid_level": "raid1", 00:16:26.860 "base_bdevs": [ 00:16:26.860 "malloc1", 00:16:26.860 "malloc2" 00:16:26.860 ], 00:16:26.860 "superblock": false, 00:16:26.860 "method": "bdev_raid_create", 00:16:26.860 "req_id": 1 00:16:26.860 } 00:16:26.860 Got JSON-RPC error response 00:16:26.860 response: 00:16:26.860 { 00:16:26.860 "code": -17, 00:16:26.860 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:26.860 } 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.860 [2024-12-07 16:42:25.667488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:26.860 [2024-12-07 16:42:25.667569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.860 [2024-12-07 16:42:25.667595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:26.860 [2024-12-07 16:42:25.667604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.860 [2024-12-07 16:42:25.669937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.860 [2024-12-07 16:42:25.669979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:26.860 [2024-12-07 16:42:25.670051] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:26.860 [2024-12-07 16:42:25.670108] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:26.860 pt1 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.860 "name": "raid_bdev1", 00:16:26.860 "uuid": "3f879521-8d7b-494f-bcd3-69d22451ac88", 00:16:26.860 "strip_size_kb": 0, 00:16:26.860 "state": "configuring", 00:16:26.860 "raid_level": "raid1", 00:16:26.860 "superblock": true, 00:16:26.860 "num_base_bdevs": 2, 00:16:26.860 "num_base_bdevs_discovered": 1, 00:16:26.860 "num_base_bdevs_operational": 2, 00:16:26.860 "base_bdevs_list": [ 00:16:26.860 { 00:16:26.860 "name": "pt1", 00:16:26.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:26.860 "is_configured": true, 00:16:26.860 "data_offset": 256, 00:16:26.860 "data_size": 7936 00:16:26.860 }, 00:16:26.860 { 
00:16:26.860 "name": null, 00:16:26.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.860 "is_configured": false, 00:16:26.860 "data_offset": 256, 00:16:26.860 "data_size": 7936 00:16:26.860 } 00:16:26.860 ] 00:16:26.860 }' 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.860 16:42:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.428 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:27.428 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:27.428 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:27.428 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:27.428 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.428 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.428 [2024-12-07 16:42:26.134717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:27.428 [2024-12-07 16:42:26.134810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.428 [2024-12-07 16:42:26.134841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:27.428 [2024-12-07 16:42:26.134852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.428 [2024-12-07 16:42:26.135132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.428 [2024-12-07 16:42:26.135152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:27.429 [2024-12-07 16:42:26.135222] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:27.429 [2024-12-07 16:42:26.135247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:27.429 [2024-12-07 16:42:26.135359] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:27.429 [2024-12-07 16:42:26.135371] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:27.429 [2024-12-07 16:42:26.135472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:27.429 [2024-12-07 16:42:26.135567] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:27.429 [2024-12-07 16:42:26.135585] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:27.429 [2024-12-07 16:42:26.135665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.429 pt2 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.429 16:42:26 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.429 "name": "raid_bdev1", 00:16:27.429 "uuid": "3f879521-8d7b-494f-bcd3-69d22451ac88", 00:16:27.429 "strip_size_kb": 0, 00:16:27.429 "state": "online", 00:16:27.429 "raid_level": "raid1", 00:16:27.429 "superblock": true, 00:16:27.429 "num_base_bdevs": 2, 00:16:27.429 "num_base_bdevs_discovered": 2, 00:16:27.429 "num_base_bdevs_operational": 2, 00:16:27.429 "base_bdevs_list": [ 00:16:27.429 { 00:16:27.429 "name": "pt1", 00:16:27.429 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:27.429 "is_configured": true, 00:16:27.429 "data_offset": 256, 00:16:27.429 "data_size": 7936 00:16:27.429 }, 00:16:27.429 { 00:16:27.429 "name": "pt2", 00:16:27.429 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:16:27.429 "is_configured": true, 00:16:27.429 "data_offset": 256, 00:16:27.429 "data_size": 7936 00:16:27.429 } 00:16:27.429 ] 00:16:27.429 }' 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.429 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 [2024-12-07 16:42:26.486437] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.689 "name": "raid_bdev1", 00:16:27.689 
"aliases": [ 00:16:27.689 "3f879521-8d7b-494f-bcd3-69d22451ac88" 00:16:27.689 ], 00:16:27.689 "product_name": "Raid Volume", 00:16:27.689 "block_size": 4096, 00:16:27.689 "num_blocks": 7936, 00:16:27.689 "uuid": "3f879521-8d7b-494f-bcd3-69d22451ac88", 00:16:27.689 "md_size": 32, 00:16:27.689 "md_interleave": false, 00:16:27.689 "dif_type": 0, 00:16:27.689 "assigned_rate_limits": { 00:16:27.689 "rw_ios_per_sec": 0, 00:16:27.689 "rw_mbytes_per_sec": 0, 00:16:27.689 "r_mbytes_per_sec": 0, 00:16:27.689 "w_mbytes_per_sec": 0 00:16:27.689 }, 00:16:27.689 "claimed": false, 00:16:27.689 "zoned": false, 00:16:27.689 "supported_io_types": { 00:16:27.689 "read": true, 00:16:27.689 "write": true, 00:16:27.689 "unmap": false, 00:16:27.689 "flush": false, 00:16:27.689 "reset": true, 00:16:27.689 "nvme_admin": false, 00:16:27.689 "nvme_io": false, 00:16:27.689 "nvme_io_md": false, 00:16:27.689 "write_zeroes": true, 00:16:27.689 "zcopy": false, 00:16:27.689 "get_zone_info": false, 00:16:27.689 "zone_management": false, 00:16:27.689 "zone_append": false, 00:16:27.689 "compare": false, 00:16:27.689 "compare_and_write": false, 00:16:27.689 "abort": false, 00:16:27.689 "seek_hole": false, 00:16:27.689 "seek_data": false, 00:16:27.689 "copy": false, 00:16:27.689 "nvme_iov_md": false 00:16:27.689 }, 00:16:27.689 "memory_domains": [ 00:16:27.689 { 00:16:27.689 "dma_device_id": "system", 00:16:27.689 "dma_device_type": 1 00:16:27.689 }, 00:16:27.689 { 00:16:27.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.689 "dma_device_type": 2 00:16:27.689 }, 00:16:27.689 { 00:16:27.689 "dma_device_id": "system", 00:16:27.689 "dma_device_type": 1 00:16:27.689 }, 00:16:27.689 { 00:16:27.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.689 "dma_device_type": 2 00:16:27.689 } 00:16:27.689 ], 00:16:27.689 "driver_specific": { 00:16:27.689 "raid": { 00:16:27.689 "uuid": "3f879521-8d7b-494f-bcd3-69d22451ac88", 00:16:27.689 "strip_size_kb": 0, 00:16:27.689 "state": "online", 00:16:27.689 
"raid_level": "raid1", 00:16:27.689 "superblock": true, 00:16:27.689 "num_base_bdevs": 2, 00:16:27.689 "num_base_bdevs_discovered": 2, 00:16:27.689 "num_base_bdevs_operational": 2, 00:16:27.689 "base_bdevs_list": [ 00:16:27.689 { 00:16:27.689 "name": "pt1", 00:16:27.689 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:27.689 "is_configured": true, 00:16:27.689 "data_offset": 256, 00:16:27.689 "data_size": 7936 00:16:27.689 }, 00:16:27.689 { 00:16:27.689 "name": "pt2", 00:16:27.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.689 "is_configured": true, 00:16:27.689 "data_offset": 256, 00:16:27.689 "data_size": 7936 00:16:27.689 } 00:16:27.689 ] 00:16:27.689 } 00:16:27.689 } 00:16:27.689 }' 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:27.689 pt2' 00:16:27.689 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.949 16:42:26 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.949 [2024-12-07 16:42:26.714016] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 3f879521-8d7b-494f-bcd3-69d22451ac88 '!=' 3f879521-8d7b-494f-bcd3-69d22451ac88 ']' 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.949 [2024-12-07 16:42:26.757700] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:27.949 
16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.949 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.949 "name": "raid_bdev1", 00:16:27.949 "uuid": "3f879521-8d7b-494f-bcd3-69d22451ac88", 00:16:27.949 "strip_size_kb": 0, 00:16:27.949 "state": "online", 00:16:27.949 "raid_level": "raid1", 00:16:27.949 "superblock": true, 00:16:27.949 "num_base_bdevs": 2, 00:16:27.949 "num_base_bdevs_discovered": 1, 00:16:27.949 "num_base_bdevs_operational": 1, 00:16:27.949 "base_bdevs_list": [ 00:16:27.949 { 00:16:27.949 "name": null, 00:16:27.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.949 "is_configured": false, 00:16:27.949 "data_offset": 0, 00:16:27.949 "data_size": 7936 00:16:27.949 }, 00:16:27.949 { 00:16:27.949 "name": "pt2", 00:16:27.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.950 "is_configured": true, 00:16:27.950 "data_offset": 256, 00:16:27.950 "data_size": 7936 00:16:27.950 } 
00:16:27.950 ] 00:16:27.950 }' 00:16:27.950 16:42:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.950 16:42:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.518 [2024-12-07 16:42:27.216865] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:28.518 [2024-12-07 16:42:27.216909] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.518 [2024-12-07 16:42:27.217016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.518 [2024-12-07 16:42:27.217080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.518 [2024-12-07 16:42:27.217090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.518 16:42:27 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.518 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.518 [2024-12-07 16:42:27.292724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.518 [2024-12-07 
16:42:27.292802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.518 [2024-12-07 16:42:27.292826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:28.518 [2024-12-07 16:42:27.292837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.518 [2024-12-07 16:42:27.295228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.518 [2024-12-07 16:42:27.295268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.518 [2024-12-07 16:42:27.295354] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:28.519 [2024-12-07 16:42:27.295393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.519 [2024-12-07 16:42:27.295485] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:28.519 [2024-12-07 16:42:27.295493] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:28.519 [2024-12-07 16:42:27.295582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:28.519 [2024-12-07 16:42:27.295668] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:28.519 [2024-12-07 16:42:27.295686] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:28.519 [2024-12-07 16:42:27.295767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.519 pt2 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.519 "name": "raid_bdev1", 00:16:28.519 "uuid": "3f879521-8d7b-494f-bcd3-69d22451ac88", 00:16:28.519 "strip_size_kb": 0, 00:16:28.519 "state": "online", 00:16:28.519 "raid_level": "raid1", 00:16:28.519 "superblock": true, 00:16:28.519 "num_base_bdevs": 2, 00:16:28.519 
"num_base_bdevs_discovered": 1, 00:16:28.519 "num_base_bdevs_operational": 1, 00:16:28.519 "base_bdevs_list": [ 00:16:28.519 { 00:16:28.519 "name": null, 00:16:28.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.519 "is_configured": false, 00:16:28.519 "data_offset": 256, 00:16:28.519 "data_size": 7936 00:16:28.519 }, 00:16:28.519 { 00:16:28.519 "name": "pt2", 00:16:28.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.519 "is_configured": true, 00:16:28.519 "data_offset": 256, 00:16:28.519 "data_size": 7936 00:16:28.519 } 00:16:28.519 ] 00:16:28.519 }' 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.519 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.088 [2024-12-07 16:42:27.748058] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.088 [2024-12-07 16:42:27.748100] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.088 [2024-12-07 16:42:27.748230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.088 [2024-12-07 16:42:27.748295] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.088 [2024-12-07 16:42:27.748309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.088 16:42:27 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.088 [2024-12-07 16:42:27.811947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:29.088 [2024-12-07 16:42:27.812053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.088 [2024-12-07 16:42:27.812080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:29.088 [2024-12-07 16:42:27.812098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.088 [2024-12-07 16:42:27.814515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.088 [2024-12-07 16:42:27.814558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:16:29.088 [2024-12-07 16:42:27.814631] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:29.088 [2024-12-07 16:42:27.814681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:29.088 [2024-12-07 16:42:27.814801] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:29.088 [2024-12-07 16:42:27.814824] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.088 [2024-12-07 16:42:27.814851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:29.088 [2024-12-07 16:42:27.814895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:29.088 [2024-12-07 16:42:27.814965] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:29.088 [2024-12-07 16:42:27.814985] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:29.088 [2024-12-07 16:42:27.815072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:29.088 [2024-12-07 16:42:27.815161] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:29.088 [2024-12-07 16:42:27.815168] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:29.088 [2024-12-07 16:42:27.815254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.088 pt1 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.088 "name": "raid_bdev1", 00:16:29.088 "uuid": "3f879521-8d7b-494f-bcd3-69d22451ac88", 00:16:29.088 "strip_size_kb": 0, 00:16:29.088 "state": "online", 00:16:29.088 "raid_level": "raid1", 
00:16:29.088 "superblock": true, 00:16:29.088 "num_base_bdevs": 2, 00:16:29.088 "num_base_bdevs_discovered": 1, 00:16:29.088 "num_base_bdevs_operational": 1, 00:16:29.088 "base_bdevs_list": [ 00:16:29.088 { 00:16:29.088 "name": null, 00:16:29.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.088 "is_configured": false, 00:16:29.088 "data_offset": 256, 00:16:29.088 "data_size": 7936 00:16:29.088 }, 00:16:29.088 { 00:16:29.088 "name": "pt2", 00:16:29.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.088 "is_configured": true, 00:16:29.088 "data_offset": 256, 00:16:29.088 "data_size": 7936 00:16:29.088 } 00:16:29.088 ] 00:16:29.088 }' 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.088 16:42:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.347 16:42:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:29.347 16:42:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:29.347 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.347 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.347 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.347 16:42:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:29.347 16:42:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:29.347 16:42:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.348 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.348 
16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.348 [2024-12-07 16:42:28.219534] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.348 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.348 16:42:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 3f879521-8d7b-494f-bcd3-69d22451ac88 '!=' 3f879521-8d7b-494f-bcd3-69d22451ac88 ']' 00:16:29.348 16:42:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 98131 00:16:29.348 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98131 ']' 00:16:29.348 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 98131 00:16:29.348 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:29.607 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:29.607 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98131 00:16:29.607 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:29.607 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:29.607 killing process with pid 98131 00:16:29.607 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98131' 00:16:29.607 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 98131 00:16:29.607 [2024-12-07 16:42:28.272301] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:29.607 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # 
wait 98131 00:16:29.607 [2024-12-07 16:42:28.272452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.607 [2024-12-07 16:42:28.272515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.607 [2024-12-07 16:42:28.272526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:29.607 [2024-12-07 16:42:28.318157] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.867 16:42:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:16:29.867 00:16:29.867 real 0m4.950s 00:16:29.867 user 0m7.780s 00:16:29.867 sys 0m1.197s 00:16:29.867 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:29.867 16:42:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.867 ************************************ 00:16:29.867 END TEST raid_superblock_test_md_separate 00:16:29.867 ************************************ 00:16:29.867 16:42:28 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:16:29.867 16:42:28 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:16:29.867 16:42:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:29.867 16:42:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:29.867 16:42:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:30.127 ************************************ 00:16:30.127 START TEST raid_rebuild_test_sb_md_separate 00:16:30.127 ************************************ 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local 
raid_level=raid1 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:30.127 
16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98447 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98447 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98447 ']' 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:30.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:30.127 16:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.127 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:30.127 Zero copy mechanism will not be used. 00:16:30.127 [2024-12-07 16:42:28.869176] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:30.127 [2024-12-07 16:42:28.869360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98447 ] 00:16:30.387 [2024-12-07 16:42:29.033817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.387 [2024-12-07 16:42:29.115257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.387 [2024-12-07 16:42:29.193009] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.387 [2024-12-07 16:42:29.193055] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.957 BaseBdev1_malloc 
00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.957 [2024-12-07 16:42:29.737982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:30.957 [2024-12-07 16:42:29.738055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.957 [2024-12-07 16:42:29.738090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:30.957 [2024-12-07 16:42:29.738100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.957 [2024-12-07 16:42:29.740485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.957 [2024-12-07 16:42:29.740523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:30.957 BaseBdev1 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.957 BaseBdev2_malloc 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.957 [2024-12-07 16:42:29.782160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:30.957 [2024-12-07 16:42:29.782232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.957 [2024-12-07 16:42:29.782260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:30.957 [2024-12-07 16:42:29.782269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.957 [2024-12-07 16:42:29.784687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.957 [2024-12-07 16:42:29.784726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:30.957 BaseBdev2 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.957 spare_malloc 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000
00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:30.957 spare_delay
00:16:30.957 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:30.958 [2024-12-07 16:42:29.830546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:16:30.958 [2024-12-07 16:42:29.830623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:30.958 [2024-12-07 16:42:29.830653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:16:30.958 [2024-12-07 16:42:29.830666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:30.958 [2024-12-07 16:42:29.833065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:30.958 [2024-12-07 16:42:29.833101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:16:30.958 spare
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:30.958 [2024-12-07 16:42:29.842547] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:30.958 [2024-12-07 16:42:29.844765] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:30.958 [2024-12-07 16:42:29.845000] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:16:30.958 [2024-12-07 16:42:29.845017] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:16:30.958 [2024-12-07 16:42:29.845119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:16:30.958 [2024-12-07 16:42:29.845241] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:16:30.958 [2024-12-07 16:42:29.845257] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:16:30.958 [2024-12-07 16:42:29.845379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:30.958 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:31.217 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:31.217 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.217 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:31.217 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:31.217 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.217 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:31.217 "name": "raid_bdev1",
00:16:31.217 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e",
00:16:31.217 "strip_size_kb": 0,
00:16:31.217 "state": "online",
00:16:31.217 "raid_level": "raid1",
00:16:31.217 "superblock": true,
00:16:31.217 "num_base_bdevs": 2,
00:16:31.217 "num_base_bdevs_discovered": 2,
00:16:31.217 "num_base_bdevs_operational": 2,
00:16:31.217 "base_bdevs_list": [
00:16:31.217 {
00:16:31.217 "name": "BaseBdev1",
00:16:31.217 "uuid": "7df708a9-1b65-52fd-94e2-213a6337c7ce",
00:16:31.217 "is_configured": true,
00:16:31.217 "data_offset": 256,
00:16:31.217 "data_size": 7936
00:16:31.217 },
00:16:31.217 {
00:16:31.217 "name": "BaseBdev2",
00:16:31.217 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00",
00:16:31.217 "is_configured": true,
00:16:31.217 "data_offset": 256,
00:16:31.217 "data_size": 7936
00:16:31.217 }
00:16:31.217 ]
00:16:31.217 }'
00:16:31.217 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:31.217 16:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:31.476 [2024-12-07 16:42:30.282178] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list
00:16:31.476 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i
00:16:31.477 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:16:31.477 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:31.477 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:16:31.736 [2024-12-07 16:42:30.549664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:16:31.736 /dev/nbd0
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:31.736 1+0 records in
00:16:31.736 1+0 records out
00:16:31.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429994 s, 9.5 MB/s
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:16:31.736 16:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:16:32.305 7936+0 records in
00:16:32.305 7936+0 records out
00:16:32.305 32505856 bytes (33 MB, 31 MiB) copied, 0.537569 s, 60.5 MB/s
00:16:32.305 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:16:32.305 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:16:32.305 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:16:32.305 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:32.305 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i
00:16:32.305 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:32.305 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:16:32.566 [2024-12-07 16:42:31.353113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:32.566 [2024-12-07 16:42:31.389181] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:32.566 "name": "raid_bdev1",
00:16:32.566 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e",
00:16:32.566 "strip_size_kb": 0,
00:16:32.566 "state": "online",
00:16:32.566 "raid_level": "raid1",
00:16:32.566 "superblock": true,
00:16:32.566 "num_base_bdevs": 2,
00:16:32.566 "num_base_bdevs_discovered": 1,
00:16:32.566 "num_base_bdevs_operational": 1,
00:16:32.566 "base_bdevs_list": [
00:16:32.566 {
00:16:32.566 "name": null,
00:16:32.566 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:32.566 "is_configured": false,
00:16:32.566 "data_offset": 0,
00:16:32.566 "data_size": 7936
00:16:32.566 },
00:16:32.566 {
00:16:32.566 "name": "BaseBdev2",
00:16:32.566 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00",
00:16:32.566 "is_configured": true,
00:16:32.566 "data_offset": 256,
00:16:32.566 "data_size": 7936
00:16:32.566 }
00:16:32.566 ]
00:16:32.566 }'
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:32.566 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:33.136 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:33.136 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:33.136 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:33.136 [2024-12-07 16:42:31.804530] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:33.136 [2024-12-07 16:42:31.807558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0
00:16:33.136 [2024-12-07 16:42:31.809887] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:33.136 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:33.136 16:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1
00:16:34.074 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:34.075 "name": "raid_bdev1",
00:16:34.075 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e",
00:16:34.075 "strip_size_kb": 0,
00:16:34.075 "state": "online",
00:16:34.075 "raid_level": "raid1",
00:16:34.075 "superblock": true,
00:16:34.075 "num_base_bdevs": 2,
00:16:34.075 "num_base_bdevs_discovered": 2,
00:16:34.075 "num_base_bdevs_operational": 2,
00:16:34.075 "process": {
00:16:34.075 "type": "rebuild",
00:16:34.075 "target": "spare",
00:16:34.075 "progress": {
00:16:34.075 "blocks": 2560,
00:16:34.075 "percent": 32
00:16:34.075 }
00:16:34.075 },
00:16:34.075 "base_bdevs_list": [
00:16:34.075 {
00:16:34.075 "name": "spare",
00:16:34.075 "uuid": "54cbad79-f0f0-5e4b-866d-07037a4a2fc7",
00:16:34.075 "is_configured": true,
00:16:34.075 "data_offset": 256,
00:16:34.075 "data_size": 7936
00:16:34.075 },
00:16:34.075 {
00:16:34.075 "name": "BaseBdev2",
00:16:34.075 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00",
00:16:34.075 "is_configured": true,
00:16:34.075 "data_offset": 256,
00:16:34.075 "data_size": 7936
00:16:34.075 }
00:16:34.075 ]
00:16:34.075 }'
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:34.075 16:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:34.335 [2024-12-07 16:42:32.976555] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:34.335 [2024-12-07 16:42:33.020333] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:16:34.335 [2024-12-07 16:42:33.020498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:34.335 [2024-12-07 16:42:33.020523] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:34.335 [2024-12-07 16:42:33.020540] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:34.335 "name": "raid_bdev1",
00:16:34.335 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e",
00:16:34.335 "strip_size_kb": 0,
00:16:34.335 "state": "online",
00:16:34.335 "raid_level": "raid1",
00:16:34.335 "superblock": true,
00:16:34.335 "num_base_bdevs": 2,
00:16:34.335 "num_base_bdevs_discovered": 1,
00:16:34.335 "num_base_bdevs_operational": 1,
00:16:34.335 "base_bdevs_list": [
00:16:34.335 {
00:16:34.335 "name": null,
00:16:34.335 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:34.335 "is_configured": false,
00:16:34.335 "data_offset": 0,
00:16:34.335 "data_size": 7936
00:16:34.335 },
00:16:34.335 {
00:16:34.335 "name": "BaseBdev2",
00:16:34.335 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00",
00:16:34.335 "is_configured": true,
00:16:34.335 "data_offset": 256,
00:16:34.335 "data_size": 7936
00:16:34.335 }
00:16:34.335 ]
00:16:34.335 }'
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:34.335 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:34.594 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:34.594 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:34.594 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:34.594 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:34.594 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:34.594 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:34.594 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:34.594 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:34.594 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:34.594 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:34.594 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:34.594 "name": "raid_bdev1",
00:16:34.594 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e",
00:16:34.594 "strip_size_kb": 0,
00:16:34.594 "state": "online",
00:16:34.594 "raid_level": "raid1",
00:16:34.594 "superblock": true,
00:16:34.594 "num_base_bdevs": 2,
00:16:34.594 "num_base_bdevs_discovered": 1,
00:16:34.594 "num_base_bdevs_operational": 1,
00:16:34.594 "base_bdevs_list": [
00:16:34.594 {
00:16:34.594 "name": null,
00:16:34.594 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:34.594 "is_configured": false,
00:16:34.594 "data_offset": 0,
00:16:34.594 "data_size": 7936
00:16:34.594 },
00:16:34.594 {
00:16:34.594 "name": "BaseBdev2",
00:16:34.595 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00",
00:16:34.595 "is_configured": true,
00:16:34.595 "data_offset": 256,
00:16:34.595 "data_size": 7936
00:16:34.595 }
00:16:34.595 ]
00:16:34.595 }'
00:16:34.595 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:34.595 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:34.854 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:34.854 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:34.854 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:34.854 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:34.854 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:34.854 [2024-12-07 16:42:33.545595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:34.854 [2024-12-07 16:42:33.548592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190
00:16:34.854 [2024-12-07 16:42:33.550838] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:34.854 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:34.854 16:42:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1
00:16:35.792 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:35.792 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:35.792 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:35.792 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:35.792 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:35.792 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:35.792 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:35.792 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:35.792 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:35.792 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:35.792 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:35.792 "name": "raid_bdev1",
00:16:35.792 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e",
00:16:35.792 "strip_size_kb": 0,
00:16:35.792 "state": "online",
00:16:35.792 "raid_level": "raid1",
00:16:35.792 "superblock": true,
00:16:35.792 "num_base_bdevs": 2,
00:16:35.792 "num_base_bdevs_discovered": 2,
00:16:35.792 "num_base_bdevs_operational": 2,
00:16:35.792 "process": {
00:16:35.792 "type": "rebuild",
00:16:35.792 "target": "spare",
00:16:35.793 "progress": {
00:16:35.793 "blocks": 2560,
00:16:35.793 "percent": 32
00:16:35.793 }
00:16:35.793 },
00:16:35.793 "base_bdevs_list": [
00:16:35.793 {
00:16:35.793 "name": "spare",
00:16:35.793 "uuid": "54cbad79-f0f0-5e4b-866d-07037a4a2fc7",
00:16:35.793 "is_configured": true,
00:16:35.793 "data_offset": 256,
00:16:35.793 "data_size": 7936
00:16:35.793 },
00:16:35.793 {
00:16:35.793 "name": "BaseBdev2",
00:16:35.793 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00",
00:16:35.793 "is_configured": true,
00:16:35.793 "data_offset": 256,
00:16:35.793 "data_size": 7936
00:16:35.793 }
00:16:35.793 ]
00:16:35.793 }'
00:16:35.793 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:35.793 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:35.793 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=606
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:36.057 "name": "raid_bdev1",
00:16:36.057 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e",
00:16:36.057 "strip_size_kb": 0,
00:16:36.057 "state": "online",
00:16:36.057 "raid_level": "raid1",
00:16:36.057 "superblock": true,
00:16:36.057 "num_base_bdevs": 2,
00:16:36.057 "num_base_bdevs_discovered": 2,
00:16:36.057 "num_base_bdevs_operational": 2,
00:16:36.057 "process": {
00:16:36.057 "type": "rebuild",
00:16:36.057 "target": "spare",
00:16:36.057 "progress": {
00:16:36.057 "blocks": 2816,
00:16:36.057 "percent": 35
00:16:36.057 }
00:16:36.057 },
00:16:36.057 "base_bdevs_list": [
00:16:36.057 {
00:16:36.057 "name": "spare",
00:16:36.057 "uuid": "54cbad79-f0f0-5e4b-866d-07037a4a2fc7",
00:16:36.057 "is_configured": true,
00:16:36.057 "data_offset": 256,
00:16:36.057 "data_size": 7936
00:16:36.057 },
00:16:36.057 {
00:16:36.057 "name": "BaseBdev2",
00:16:36.057 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00",
00:16:36.057 "is_configured": true,
00:16:36.057 "data_offset": 256,
00:16:36.057 "data_size": 7936
00:16:36.057 }
00:16:36.057 ]
00:16:36.057 }'
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:36.057 16:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:37.013 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:37.013 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:37.013 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:37.013 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:37.013 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:37.013 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:37.013 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:37.013 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:37.013 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:37.013 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:37.013 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:37.013 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:37.013 "name": "raid_bdev1",
00:16:37.013 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e",
00:16:37.013 "strip_size_kb": 0,
00:16:37.013 "state": "online",
00:16:37.013 "raid_level": "raid1",
00:16:37.013 "superblock": true,
00:16:37.013 "num_base_bdevs": 2,
00:16:37.013 "num_base_bdevs_discovered": 2,
00:16:37.013 "num_base_bdevs_operational": 2,
00:16:37.013 "process": {
00:16:37.013 "type": "rebuild",
00:16:37.013 "target": "spare",
00:16:37.013 "progress": {
00:16:37.013 "blocks": 5632,
00:16:37.013 "percent": 70
00:16:37.013 }
00:16:37.013 },
00:16:37.013 "base_bdevs_list": [
00:16:37.013 {
00:16:37.013 "name": "spare",
00:16:37.013 "uuid": "54cbad79-f0f0-5e4b-866d-07037a4a2fc7",
00:16:37.013 "is_configured": true,
00:16:37.013 "data_offset": 256,
00:16:37.013 "data_size": 7936
00:16:37.013 },
00:16:37.013 {
00:16:37.013 "name": "BaseBdev2",
00:16:37.013 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00",
00:16:37.013 "is_configured": true,
00:16:37.013 "data_offset": 256,
00:16:37.013 "data_size": 7936
00:16:37.013 }
00:16:37.013 ]
00:16:37.013 }'
00:16:37.013 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:37.272 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:37.272 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:37.272 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:37.272 16:42:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:37.840 [2024-12-07 16:42:36.676022] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:16:37.840 [2024-12-07 16:42:36.676206] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:16:37.840 [2024-12-07 16:42:36.676414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:38.099 16:42:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:38.099 16:42:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:38.099 16:42:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:38.099 16:42:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:38.099 16:42:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:38.099 16:42:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:38.099 16:42:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:38.099 16:42:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:38.099 16:42:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:38.099 16:42:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:38.359 16:42:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:38.359 "name": "raid_bdev1",
00:16:38.359 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e",
00:16:38.359 "strip_size_kb": 0,
00:16:38.359 "state": "online",
00:16:38.359 "raid_level": "raid1",
00:16:38.359 "superblock": true,
"num_base_bdevs": 2, 00:16:38.359 "num_base_bdevs_discovered": 2, 00:16:38.359 "num_base_bdevs_operational": 2, 00:16:38.359 "base_bdevs_list": [ 00:16:38.359 { 00:16:38.359 "name": "spare", 00:16:38.359 "uuid": "54cbad79-f0f0-5e4b-866d-07037a4a2fc7", 00:16:38.359 "is_configured": true, 00:16:38.359 "data_offset": 256, 00:16:38.359 "data_size": 7936 00:16:38.359 }, 00:16:38.359 { 00:16:38.359 "name": "BaseBdev2", 00:16:38.359 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:38.359 "is_configured": true, 00:16:38.359 "data_offset": 256, 00:16:38.359 "data_size": 7936 00:16:38.359 } 00:16:38.359 ] 00:16:38.359 }' 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.359 16:42:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.359 "name": "raid_bdev1", 00:16:38.359 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e", 00:16:38.359 "strip_size_kb": 0, 00:16:38.359 "state": "online", 00:16:38.359 "raid_level": "raid1", 00:16:38.359 "superblock": true, 00:16:38.359 "num_base_bdevs": 2, 00:16:38.359 "num_base_bdevs_discovered": 2, 00:16:38.359 "num_base_bdevs_operational": 2, 00:16:38.359 "base_bdevs_list": [ 00:16:38.359 { 00:16:38.359 "name": "spare", 00:16:38.359 "uuid": "54cbad79-f0f0-5e4b-866d-07037a4a2fc7", 00:16:38.359 "is_configured": true, 00:16:38.359 "data_offset": 256, 00:16:38.359 "data_size": 7936 00:16:38.359 }, 00:16:38.359 { 00:16:38.359 "name": "BaseBdev2", 00:16:38.359 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:38.359 "is_configured": true, 00:16:38.359 "data_offset": 256, 00:16:38.359 "data_size": 7936 00:16:38.359 } 00:16:38.359 ] 00:16:38.359 }' 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:38.359 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.619 "name": "raid_bdev1", 00:16:38.619 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e", 00:16:38.619 
"strip_size_kb": 0, 00:16:38.619 "state": "online", 00:16:38.619 "raid_level": "raid1", 00:16:38.619 "superblock": true, 00:16:38.619 "num_base_bdevs": 2, 00:16:38.619 "num_base_bdevs_discovered": 2, 00:16:38.619 "num_base_bdevs_operational": 2, 00:16:38.619 "base_bdevs_list": [ 00:16:38.619 { 00:16:38.619 "name": "spare", 00:16:38.619 "uuid": "54cbad79-f0f0-5e4b-866d-07037a4a2fc7", 00:16:38.619 "is_configured": true, 00:16:38.619 "data_offset": 256, 00:16:38.619 "data_size": 7936 00:16:38.619 }, 00:16:38.619 { 00:16:38.619 "name": "BaseBdev2", 00:16:38.619 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:38.619 "is_configured": true, 00:16:38.619 "data_offset": 256, 00:16:38.619 "data_size": 7936 00:16:38.619 } 00:16:38.619 ] 00:16:38.619 }' 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.619 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.880 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:38.880 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.880 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.880 [2024-12-07 16:42:37.724592] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.880 [2024-12-07 16:42:37.724646] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.880 [2024-12-07 16:42:37.724798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.880 [2024-12-07 16:42:37.724910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.880 [2024-12-07 16:42:37.724930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, 
state offline 00:16:38.880 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.880 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:38.880 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.880 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.880 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.880 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.141 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:39.141 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:39.141 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:39.141 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:39.141 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:39.141 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:39.141 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:39.141 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:39.141 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:39.141 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:39.141 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:39.141 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:39.141 16:42:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:39.141 /dev/nbd0 00:16:39.141 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:39.141 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:39.141 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:39.141 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:39.141 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:39.141 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:39.141 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:39.141 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:39.141 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:39.141 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:39.141 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.141 1+0 records in 00:16:39.141 1+0 records out 00:16:39.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682009 s, 6.0 MB/s 00:16:39.141 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.141 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:39.141 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:39.401 /dev/nbd1 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.401 1+0 records in 00:16:39.401 1+0 records out 00:16:39.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430796 s, 9.5 MB/s 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:39.401 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.661 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:39.661 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:39.661 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.661 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:39.661 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:39.661 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:39.661 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:39.661 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:39.661 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:16:39.661 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:39.661 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:39.661 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:39.921 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:39.921 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:39.921 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:39.921 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:39.921 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:39.921 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:39.921 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:39.921 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:39.921 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:39.921 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.182 [2024-12-07 16:42:38.869163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:40.182 [2024-12-07 16:42:38.869238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.182 [2024-12-07 16:42:38.869276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:40.182 [2024-12-07 16:42:38.869294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:40.182 [2024-12-07 16:42:38.871677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.182 [2024-12-07 16:42:38.871721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:40.182 [2024-12-07 16:42:38.871799] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:40.182 [2024-12-07 16:42:38.871864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:40.182 [2024-12-07 16:42:38.872003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.182 spare 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.182 [2024-12-07 16:42:38.971930] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:40.182 [2024-12-07 16:42:38.971987] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:40.182 [2024-12-07 16:42:38.972176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:16:40.182 [2024-12-07 16:42:38.972385] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:40.182 [2024-12-07 16:42:38.972400] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:40.182 [2024-12-07 16:42:38.972544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.182 16:42:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.182 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.182 "name": "raid_bdev1", 00:16:40.182 "uuid": 
"d4a5796d-9ed1-44d7-a5d3-53429929d90e", 00:16:40.182 "strip_size_kb": 0, 00:16:40.182 "state": "online", 00:16:40.182 "raid_level": "raid1", 00:16:40.182 "superblock": true, 00:16:40.182 "num_base_bdevs": 2, 00:16:40.182 "num_base_bdevs_discovered": 2, 00:16:40.182 "num_base_bdevs_operational": 2, 00:16:40.182 "base_bdevs_list": [ 00:16:40.182 { 00:16:40.182 "name": "spare", 00:16:40.182 "uuid": "54cbad79-f0f0-5e4b-866d-07037a4a2fc7", 00:16:40.182 "is_configured": true, 00:16:40.182 "data_offset": 256, 00:16:40.182 "data_size": 7936 00:16:40.182 }, 00:16:40.182 { 00:16:40.182 "name": "BaseBdev2", 00:16:40.182 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:40.182 "is_configured": true, 00:16:40.183 "data_offset": 256, 00:16:40.183 "data_size": 7936 00:16:40.183 } 00:16:40.183 ] 00:16:40.183 }' 00:16:40.183 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.183 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.753 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.753 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.753 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.753 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.753 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.753 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.753 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.753 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:16:40.753 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.753 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.753 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.753 "name": "raid_bdev1", 00:16:40.753 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e", 00:16:40.753 "strip_size_kb": 0, 00:16:40.753 "state": "online", 00:16:40.753 "raid_level": "raid1", 00:16:40.753 "superblock": true, 00:16:40.753 "num_base_bdevs": 2, 00:16:40.753 "num_base_bdevs_discovered": 2, 00:16:40.753 "num_base_bdevs_operational": 2, 00:16:40.753 "base_bdevs_list": [ 00:16:40.753 { 00:16:40.753 "name": "spare", 00:16:40.753 "uuid": "54cbad79-f0f0-5e4b-866d-07037a4a2fc7", 00:16:40.753 "is_configured": true, 00:16:40.753 "data_offset": 256, 00:16:40.753 "data_size": 7936 00:16:40.753 }, 00:16:40.753 { 00:16:40.753 "name": "BaseBdev2", 00:16:40.753 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:40.753 "is_configured": true, 00:16:40.753 "data_offset": 256, 00:16:40.753 "data_size": 7936 00:16:40.753 } 00:16:40.753 ] 00:16:40.753 }' 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.754 
16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.754 [2024-12-07 16:42:39.572028] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.754 "name": "raid_bdev1", 00:16:40.754 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e", 00:16:40.754 "strip_size_kb": 0, 00:16:40.754 "state": "online", 00:16:40.754 "raid_level": "raid1", 00:16:40.754 "superblock": true, 00:16:40.754 "num_base_bdevs": 2, 00:16:40.754 "num_base_bdevs_discovered": 1, 00:16:40.754 "num_base_bdevs_operational": 1, 00:16:40.754 "base_bdevs_list": [ 00:16:40.754 { 00:16:40.754 "name": null, 00:16:40.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.754 "is_configured": false, 00:16:40.754 "data_offset": 0, 00:16:40.754 "data_size": 7936 00:16:40.754 }, 00:16:40.754 { 00:16:40.754 "name": "BaseBdev2", 00:16:40.754 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:40.754 "is_configured": true, 00:16:40.754 "data_offset": 256, 00:16:40.754 "data_size": 7936 00:16:40.754 } 00:16:40.754 ] 00:16:40.754 }' 00:16:40.754 16:42:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.754 16:42:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.324 16:42:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:41.324 16:42:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.324 16:42:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.324 [2024-12-07 16:42:40.011315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.324 [2024-12-07 16:42:40.011666] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:41.324 [2024-12-07 16:42:40.011746] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:41.324 [2024-12-07 16:42:40.011830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.324 [2024-12-07 16:42:40.014764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:16:41.324 [2024-12-07 16:42:40.017125] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:41.324 16:42:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.324 16:42:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.265 "name": "raid_bdev1", 00:16:42.265 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e", 00:16:42.265 "strip_size_kb": 0, 00:16:42.265 "state": "online", 00:16:42.265 "raid_level": "raid1", 00:16:42.265 "superblock": true, 00:16:42.265 "num_base_bdevs": 2, 00:16:42.265 "num_base_bdevs_discovered": 2, 00:16:42.265 "num_base_bdevs_operational": 2, 00:16:42.265 "process": { 00:16:42.265 "type": "rebuild", 00:16:42.265 "target": "spare", 00:16:42.265 "progress": { 00:16:42.265 "blocks": 2560, 00:16:42.265 "percent": 32 00:16:42.265 } 00:16:42.265 }, 00:16:42.265 "base_bdevs_list": [ 00:16:42.265 { 00:16:42.265 "name": "spare", 00:16:42.265 "uuid": "54cbad79-f0f0-5e4b-866d-07037a4a2fc7", 00:16:42.265 "is_configured": true, 00:16:42.265 "data_offset": 256, 00:16:42.265 "data_size": 7936 00:16:42.265 }, 00:16:42.265 { 00:16:42.265 "name": "BaseBdev2", 00:16:42.265 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:42.265 "is_configured": true, 00:16:42.265 "data_offset": 256, 00:16:42.265 "data_size": 7936 00:16:42.265 } 00:16:42.265 ] 00:16:42.265 }' 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:42.265 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.266 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.266 [2024-12-07 16:42:41.136947] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.527 [2024-12-07 16:42:41.226583] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:42.527 [2024-12-07 16:42:41.226767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.527 [2024-12-07 16:42:41.226793] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.527 [2024-12-07 16:42:41.226802] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.527 "name": "raid_bdev1", 00:16:42.527 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e", 00:16:42.527 "strip_size_kb": 0, 00:16:42.527 "state": "online", 00:16:42.527 "raid_level": "raid1", 00:16:42.527 "superblock": true, 00:16:42.527 "num_base_bdevs": 2, 00:16:42.527 "num_base_bdevs_discovered": 1, 00:16:42.527 "num_base_bdevs_operational": 1, 00:16:42.527 "base_bdevs_list": [ 00:16:42.527 { 00:16:42.527 "name": null, 00:16:42.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.527 
"is_configured": false, 00:16:42.527 "data_offset": 0, 00:16:42.527 "data_size": 7936 00:16:42.527 }, 00:16:42.527 { 00:16:42.527 "name": "BaseBdev2", 00:16:42.527 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:42.527 "is_configured": true, 00:16:42.527 "data_offset": 256, 00:16:42.527 "data_size": 7936 00:16:42.527 } 00:16:42.527 ] 00:16:42.527 }' 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.527 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.788 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:42.788 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.788 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.788 [2024-12-07 16:42:41.679787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:42.788 [2024-12-07 16:42:41.679937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.788 [2024-12-07 16:42:41.679991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:42.788 [2024-12-07 16:42:41.680023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.788 [2024-12-07 16:42:41.680365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.788 [2024-12-07 16:42:41.680419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:42.788 [2024-12-07 16:42:41.680531] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:42.788 [2024-12-07 16:42:41.680569] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:16:42.788 [2024-12-07 16:42:41.680618] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:42.788 [2024-12-07 16:42:41.680680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.788 [2024-12-07 16:42:41.683763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:42.788 spare 00:16:42.788 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.048 16:42:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:43.048 [2024-12-07 16:42:41.686156] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:43.987 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.987 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.987 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.987 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.988 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.988 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.988 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.988 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.988 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.988 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:43.988 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.988 "name": "raid_bdev1", 00:16:43.988 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e", 00:16:43.988 "strip_size_kb": 0, 00:16:43.988 "state": "online", 00:16:43.988 "raid_level": "raid1", 00:16:43.988 "superblock": true, 00:16:43.988 "num_base_bdevs": 2, 00:16:43.988 "num_base_bdevs_discovered": 2, 00:16:43.988 "num_base_bdevs_operational": 2, 00:16:43.988 "process": { 00:16:43.988 "type": "rebuild", 00:16:43.988 "target": "spare", 00:16:43.988 "progress": { 00:16:43.988 "blocks": 2560, 00:16:43.988 "percent": 32 00:16:43.988 } 00:16:43.988 }, 00:16:43.988 "base_bdevs_list": [ 00:16:43.988 { 00:16:43.988 "name": "spare", 00:16:43.988 "uuid": "54cbad79-f0f0-5e4b-866d-07037a4a2fc7", 00:16:43.988 "is_configured": true, 00:16:43.988 "data_offset": 256, 00:16:43.988 "data_size": 7936 00:16:43.988 }, 00:16:43.988 { 00:16:43.988 "name": "BaseBdev2", 00:16:43.988 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:43.988 "is_configured": true, 00:16:43.988 "data_offset": 256, 00:16:43.988 "data_size": 7936 00:16:43.988 } 00:16:43.988 ] 00:16:43.988 }' 00:16:43.988 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.988 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.988 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.988 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.988 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:43.988 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.988 16:42:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.988 [2024-12-07 16:42:42.832003] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.248 [2024-12-07 16:42:42.895809] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:44.248 [2024-12-07 16:42:42.895926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.248 [2024-12-07 16:42:42.895943] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.248 [2024-12-07 16:42:42.895955] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:44.248 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.248 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.249 16:42:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.249 "name": "raid_bdev1", 00:16:44.249 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e", 00:16:44.249 "strip_size_kb": 0, 00:16:44.249 "state": "online", 00:16:44.249 "raid_level": "raid1", 00:16:44.249 "superblock": true, 00:16:44.249 "num_base_bdevs": 2, 00:16:44.249 "num_base_bdevs_discovered": 1, 00:16:44.249 "num_base_bdevs_operational": 1, 00:16:44.249 "base_bdevs_list": [ 00:16:44.249 { 00:16:44.249 "name": null, 00:16:44.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.249 "is_configured": false, 00:16:44.249 "data_offset": 0, 00:16:44.249 "data_size": 7936 00:16:44.249 }, 00:16:44.249 { 00:16:44.249 "name": "BaseBdev2", 00:16:44.249 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:44.249 "is_configured": true, 00:16:44.249 "data_offset": 256, 00:16:44.249 "data_size": 7936 00:16:44.249 } 00:16:44.249 ] 00:16:44.249 }' 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.249 16:42:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.509 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:16:44.509 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.509 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.509 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.509 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.509 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.509 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.509 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.509 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.509 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.509 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.509 "name": "raid_bdev1", 00:16:44.509 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e", 00:16:44.509 "strip_size_kb": 0, 00:16:44.509 "state": "online", 00:16:44.509 "raid_level": "raid1", 00:16:44.509 "superblock": true, 00:16:44.509 "num_base_bdevs": 2, 00:16:44.509 "num_base_bdevs_discovered": 1, 00:16:44.509 "num_base_bdevs_operational": 1, 00:16:44.509 "base_bdevs_list": [ 00:16:44.509 { 00:16:44.509 "name": null, 00:16:44.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.509 "is_configured": false, 00:16:44.509 "data_offset": 0, 00:16:44.509 "data_size": 7936 00:16:44.509 }, 00:16:44.509 { 00:16:44.509 "name": "BaseBdev2", 00:16:44.509 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:44.509 "is_configured": true, 
00:16:44.509 "data_offset": 256, 00:16:44.509 "data_size": 7936 00:16:44.509 } 00:16:44.509 ] 00:16:44.509 }' 00:16:44.509 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.509 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.509 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.770 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:44.770 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:44.770 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.770 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.770 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.770 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:44.770 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.770 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.770 [2024-12-07 16:42:43.472701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:44.770 [2024-12-07 16:42:43.472783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.770 [2024-12-07 16:42:43.472807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:44.770 [2024-12-07 16:42:43.472820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.770 [2024-12-07 16:42:43.473087] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.770 [2024-12-07 16:42:43.473105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:44.770 [2024-12-07 16:42:43.473164] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:44.770 [2024-12-07 16:42:43.473184] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:44.770 [2024-12-07 16:42:43.473193] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:44.770 [2024-12-07 16:42:43.473210] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:44.770 BaseBdev1 00:16:44.770 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.770 16:42:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.709 "name": "raid_bdev1", 00:16:45.709 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e", 00:16:45.709 "strip_size_kb": 0, 00:16:45.709 "state": "online", 00:16:45.709 "raid_level": "raid1", 00:16:45.709 "superblock": true, 00:16:45.709 "num_base_bdevs": 2, 00:16:45.709 "num_base_bdevs_discovered": 1, 00:16:45.709 "num_base_bdevs_operational": 1, 00:16:45.709 "base_bdevs_list": [ 00:16:45.709 { 00:16:45.709 "name": null, 00:16:45.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.709 "is_configured": false, 00:16:45.709 "data_offset": 0, 00:16:45.709 "data_size": 7936 00:16:45.709 }, 00:16:45.709 { 00:16:45.709 "name": "BaseBdev2", 00:16:45.709 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:45.709 "is_configured": true, 00:16:45.709 "data_offset": 256, 00:16:45.709 "data_size": 7936 00:16:45.709 } 00:16:45.709 ] 00:16:45.709 }' 00:16:45.709 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.709 16:42:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.278 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.278 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.278 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.278 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.278 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.278 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.278 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.278 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.278 16:42:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.278 "name": "raid_bdev1", 00:16:46.278 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e", 00:16:46.278 "strip_size_kb": 0, 00:16:46.278 "state": "online", 00:16:46.278 "raid_level": "raid1", 00:16:46.278 "superblock": true, 00:16:46.278 "num_base_bdevs": 2, 00:16:46.278 "num_base_bdevs_discovered": 1, 00:16:46.278 "num_base_bdevs_operational": 1, 00:16:46.278 "base_bdevs_list": [ 00:16:46.278 { 00:16:46.278 "name": null, 00:16:46.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.278 "is_configured": false, 00:16:46.278 "data_offset": 0, 00:16:46.278 
"data_size": 7936 00:16:46.278 }, 00:16:46.278 { 00:16:46.278 "name": "BaseBdev2", 00:16:46.278 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:46.278 "is_configured": true, 00:16:46.278 "data_offset": 256, 00:16:46.278 "data_size": 7936 00:16:46.278 } 00:16:46.278 ] 00:16:46.278 }' 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.278 [2024-12-07 16:42:45.133953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.278 [2024-12-07 16:42:45.134230] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:46.278 [2024-12-07 16:42:45.134288] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:46.278 request: 00:16:46.278 { 00:16:46.278 "base_bdev": "BaseBdev1", 00:16:46.278 "raid_bdev": "raid_bdev1", 00:16:46.278 "method": "bdev_raid_add_base_bdev", 00:16:46.278 "req_id": 1 00:16:46.278 } 00:16:46.278 Got JSON-RPC error response 00:16:46.278 response: 00:16:46.278 { 00:16:46.278 "code": -22, 00:16:46.278 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:46.278 } 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:46.278 16:42:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.658 "name": "raid_bdev1", 00:16:47.658 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e", 00:16:47.658 "strip_size_kb": 0, 00:16:47.658 "state": "online", 00:16:47.658 "raid_level": "raid1", 00:16:47.658 "superblock": true, 00:16:47.658 "num_base_bdevs": 2, 00:16:47.658 "num_base_bdevs_discovered": 1, 00:16:47.658 "num_base_bdevs_operational": 1, 00:16:47.658 "base_bdevs_list": [ 
00:16:47.658 { 00:16:47.658 "name": null, 00:16:47.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.658 "is_configured": false, 00:16:47.658 "data_offset": 0, 00:16:47.658 "data_size": 7936 00:16:47.658 }, 00:16:47.658 { 00:16:47.658 "name": "BaseBdev2", 00:16:47.658 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:47.658 "is_configured": true, 00:16:47.658 "data_offset": 256, 00:16:47.658 "data_size": 7936 00:16:47.658 } 00:16:47.658 ] 00:16:47.658 }' 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.658 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.918 "name": "raid_bdev1", 00:16:47.918 "uuid": "d4a5796d-9ed1-44d7-a5d3-53429929d90e", 00:16:47.918 "strip_size_kb": 0, 00:16:47.918 "state": "online", 00:16:47.918 "raid_level": "raid1", 00:16:47.918 "superblock": true, 00:16:47.918 "num_base_bdevs": 2, 00:16:47.918 "num_base_bdevs_discovered": 1, 00:16:47.918 "num_base_bdevs_operational": 1, 00:16:47.918 "base_bdevs_list": [ 00:16:47.918 { 00:16:47.918 "name": null, 00:16:47.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.918 "is_configured": false, 00:16:47.918 "data_offset": 0, 00:16:47.918 "data_size": 7936 00:16:47.918 }, 00:16:47.918 { 00:16:47.918 "name": "BaseBdev2", 00:16:47.918 "uuid": "f7cadd78-0ba7-5025-ba57-de9684c94e00", 00:16:47.918 "is_configured": true, 00:16:47.918 "data_offset": 256, 00:16:47.918 "data_size": 7936 00:16:47.918 } 00:16:47.918 ] 00:16:47.918 }' 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98447 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98447 ']' 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98447 00:16:47.918 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:47.919 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:47.919 
16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98447 00:16:47.919 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:47.919 killing process with pid 98447 00:16:47.919 Received shutdown signal, test time was about 60.000000 seconds 00:16:47.919 00:16:47.919 Latency(us) 00:16:47.919 [2024-12-07T16:42:46.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.919 [2024-12-07T16:42:46.818Z] =================================================================================================================== 00:16:47.919 [2024-12-07T16:42:46.818Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:47.919 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:47.919 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98447' 00:16:47.919 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98447 00:16:47.919 [2024-12-07 16:42:46.797374] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:47.919 [2024-12-07 16:42:46.797560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.919 16:42:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98447 00:16:47.919 [2024-12-07 16:42:46.797622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.919 [2024-12-07 16:42:46.797632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:48.179 [2024-12-07 16:42:46.860748] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.439 16:42:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:16:48.439 00:16:48.439 real 0m18.454s 00:16:48.439 user 0m24.187s 00:16:48.439 sys 0m2.745s 00:16:48.439 16:42:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:48.439 ************************************ 00:16:48.439 END TEST raid_rebuild_test_sb_md_separate 00:16:48.439 ************************************ 00:16:48.439 16:42:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.439 16:42:47 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:48.439 16:42:47 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:48.439 16:42:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:48.439 16:42:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:48.439 16:42:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.439 ************************************ 00:16:48.439 START TEST raid_state_function_test_sb_md_interleaved 00:16:48.439 ************************************ 00:16:48.439 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:48.439 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:48.439 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:48.439 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:48.439 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:48.439 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:48.439 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:48.440 16:42:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:48.440 Process raid pid: 99128 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=99128 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99128' 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 99128 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99128 ']' 00:16:48.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:48.440 16:42:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.700 [2024-12-07 16:42:47.393631] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:48.700 [2024-12-07 16:42:47.393769] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.700 [2024-12-07 16:42:47.536176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.960 [2024-12-07 16:42:47.617029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.960 [2024-12-07 16:42:47.695224] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.960 [2024-12-07 16:42:47.695395] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.529 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:49.529 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:49.529 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:49.529 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.529 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.529 [2024-12-07 16:42:48.248064] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.529 [2024-12-07 16:42:48.248242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.529 [2024-12-07 16:42:48.248280] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.529 [2024-12-07 16:42:48.248305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.530 16:42:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.530 16:42:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.530 "name": "Existed_Raid", 00:16:49.530 "uuid": "4b7078ab-cff8-4bac-9722-d948dfb3d7ea", 00:16:49.530 "strip_size_kb": 0, 00:16:49.530 "state": "configuring", 00:16:49.530 "raid_level": "raid1", 00:16:49.530 "superblock": true, 00:16:49.530 "num_base_bdevs": 2, 00:16:49.530 "num_base_bdevs_discovered": 0, 00:16:49.530 "num_base_bdevs_operational": 2, 00:16:49.530 "base_bdevs_list": [ 00:16:49.530 { 00:16:49.530 "name": "BaseBdev1", 00:16:49.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.530 "is_configured": false, 00:16:49.530 "data_offset": 0, 00:16:49.530 "data_size": 0 00:16:49.530 }, 00:16:49.530 { 00:16:49.530 "name": "BaseBdev2", 00:16:49.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.530 "is_configured": false, 00:16:49.530 "data_offset": 0, 00:16:49.530 "data_size": 0 00:16:49.530 } 00:16:49.530 ] 00:16:49.530 }' 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.530 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.790 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:49.790 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.790 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.790 [2024-12-07 16:42:48.671219] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:49.790 [2024-12-07 16:42:48.671393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state 
configuring 00:16:49.790 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.790 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:49.790 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.790 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.790 [2024-12-07 16:42:48.683243] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.790 [2024-12-07 16:42:48.683370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.790 [2024-12-07 16:42:48.683403] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.790 [2024-12-07 16:42:48.683428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.050 [2024-12-07 16:42:48.711228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.050 BaseBdev1 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.050 [ 00:16:50.050 { 00:16:50.050 "name": "BaseBdev1", 00:16:50.050 "aliases": [ 00:16:50.050 "21fcb3b6-d2c9-4b59-8c19-ad75c0cd9819" 00:16:50.050 ], 00:16:50.050 "product_name": "Malloc disk", 00:16:50.050 "block_size": 4128, 00:16:50.050 "num_blocks": 8192, 00:16:50.050 "uuid": "21fcb3b6-d2c9-4b59-8c19-ad75c0cd9819", 00:16:50.050 "md_size": 32, 00:16:50.050 
"md_interleave": true, 00:16:50.050 "dif_type": 0, 00:16:50.050 "assigned_rate_limits": { 00:16:50.050 "rw_ios_per_sec": 0, 00:16:50.050 "rw_mbytes_per_sec": 0, 00:16:50.050 "r_mbytes_per_sec": 0, 00:16:50.050 "w_mbytes_per_sec": 0 00:16:50.050 }, 00:16:50.050 "claimed": true, 00:16:50.050 "claim_type": "exclusive_write", 00:16:50.050 "zoned": false, 00:16:50.050 "supported_io_types": { 00:16:50.050 "read": true, 00:16:50.050 "write": true, 00:16:50.050 "unmap": true, 00:16:50.050 "flush": true, 00:16:50.050 "reset": true, 00:16:50.050 "nvme_admin": false, 00:16:50.050 "nvme_io": false, 00:16:50.050 "nvme_io_md": false, 00:16:50.050 "write_zeroes": true, 00:16:50.050 "zcopy": true, 00:16:50.050 "get_zone_info": false, 00:16:50.050 "zone_management": false, 00:16:50.050 "zone_append": false, 00:16:50.050 "compare": false, 00:16:50.050 "compare_and_write": false, 00:16:50.050 "abort": true, 00:16:50.050 "seek_hole": false, 00:16:50.050 "seek_data": false, 00:16:50.050 "copy": true, 00:16:50.050 "nvme_iov_md": false 00:16:50.050 }, 00:16:50.050 "memory_domains": [ 00:16:50.050 { 00:16:50.050 "dma_device_id": "system", 00:16:50.050 "dma_device_type": 1 00:16:50.050 }, 00:16:50.050 { 00:16:50.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.050 "dma_device_type": 2 00:16:50.050 } 00:16:50.050 ], 00:16:50.050 "driver_specific": {} 00:16:50.050 } 00:16:50.050 ] 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.050 16:42:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.050 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.050 "name": "Existed_Raid", 00:16:50.050 "uuid": "2cb501d8-08d9-4322-b9e6-2b15c133d214", 00:16:50.050 "strip_size_kb": 0, 00:16:50.050 "state": "configuring", 00:16:50.050 "raid_level": "raid1", 
00:16:50.050 "superblock": true, 00:16:50.050 "num_base_bdevs": 2, 00:16:50.050 "num_base_bdevs_discovered": 1, 00:16:50.050 "num_base_bdevs_operational": 2, 00:16:50.050 "base_bdevs_list": [ 00:16:50.050 { 00:16:50.050 "name": "BaseBdev1", 00:16:50.050 "uuid": "21fcb3b6-d2c9-4b59-8c19-ad75c0cd9819", 00:16:50.050 "is_configured": true, 00:16:50.050 "data_offset": 256, 00:16:50.051 "data_size": 7936 00:16:50.051 }, 00:16:50.051 { 00:16:50.051 "name": "BaseBdev2", 00:16:50.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.051 "is_configured": false, 00:16:50.051 "data_offset": 0, 00:16:50.051 "data_size": 0 00:16:50.051 } 00:16:50.051 ] 00:16:50.051 }' 00:16:50.051 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.051 16:42:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.310 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:50.310 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.310 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.310 [2024-12-07 16:42:49.202506] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.310 [2024-12-07 16:42:49.202652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.570 [2024-12-07 16:42:49.214609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.570 [2024-12-07 16:42:49.216950] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.570 [2024-12-07 16:42:49.217039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.570 
16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.570 "name": "Existed_Raid", 00:16:50.570 "uuid": "6ac65b9a-3ae1-49f2-acfb-c18e65e1fd65", 00:16:50.570 "strip_size_kb": 0, 00:16:50.570 "state": "configuring", 00:16:50.570 "raid_level": "raid1", 00:16:50.570 "superblock": true, 00:16:50.570 "num_base_bdevs": 2, 00:16:50.570 "num_base_bdevs_discovered": 1, 00:16:50.570 "num_base_bdevs_operational": 2, 00:16:50.570 "base_bdevs_list": [ 00:16:50.570 { 00:16:50.570 "name": "BaseBdev1", 00:16:50.570 "uuid": "21fcb3b6-d2c9-4b59-8c19-ad75c0cd9819", 00:16:50.570 "is_configured": true, 00:16:50.570 "data_offset": 256, 00:16:50.570 "data_size": 7936 00:16:50.570 }, 00:16:50.570 { 00:16:50.570 "name": "BaseBdev2", 00:16:50.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.570 "is_configured": false, 00:16:50.570 "data_offset": 0, 00:16:50.570 "data_size": 0 00:16:50.570 } 00:16:50.570 ] 00:16:50.570 }' 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:50.570 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.831 [2024-12-07 16:42:49.634881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.831 BaseBdev2 00:16:50.831 [2024-12-07 16:42:49.635261] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:50.831 [2024-12-07 16:42:49.635284] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:50.831 [2024-12-07 16:42:49.635468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:50.831 [2024-12-07 16:42:49.635558] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:50.831 [2024-12-07 16:42:49.635578] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:50.831 [2024-12-07 16:42:49.635664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.831 [ 00:16:50.831 { 00:16:50.831 "name": "BaseBdev2", 00:16:50.831 "aliases": [ 00:16:50.831 "02c2400f-a388-4ad8-9d12-8800c6f48384" 00:16:50.831 ], 00:16:50.831 "product_name": "Malloc disk", 00:16:50.831 "block_size": 4128, 00:16:50.831 "num_blocks": 8192, 00:16:50.831 "uuid": "02c2400f-a388-4ad8-9d12-8800c6f48384", 00:16:50.831 "md_size": 32, 00:16:50.831 "md_interleave": true, 00:16:50.831 "dif_type": 0, 00:16:50.831 "assigned_rate_limits": { 00:16:50.831 "rw_ios_per_sec": 0, 00:16:50.831 "rw_mbytes_per_sec": 0, 00:16:50.831 "r_mbytes_per_sec": 0, 00:16:50.831 "w_mbytes_per_sec": 0 00:16:50.831 }, 00:16:50.831 "claimed": true, 00:16:50.831 "claim_type": "exclusive_write", 
00:16:50.831 "zoned": false, 00:16:50.831 "supported_io_types": { 00:16:50.831 "read": true, 00:16:50.831 "write": true, 00:16:50.831 "unmap": true, 00:16:50.831 "flush": true, 00:16:50.831 "reset": true, 00:16:50.831 "nvme_admin": false, 00:16:50.831 "nvme_io": false, 00:16:50.831 "nvme_io_md": false, 00:16:50.831 "write_zeroes": true, 00:16:50.831 "zcopy": true, 00:16:50.831 "get_zone_info": false, 00:16:50.831 "zone_management": false, 00:16:50.831 "zone_append": false, 00:16:50.831 "compare": false, 00:16:50.831 "compare_and_write": false, 00:16:50.831 "abort": true, 00:16:50.831 "seek_hole": false, 00:16:50.831 "seek_data": false, 00:16:50.831 "copy": true, 00:16:50.831 "nvme_iov_md": false 00:16:50.831 }, 00:16:50.831 "memory_domains": [ 00:16:50.831 { 00:16:50.831 "dma_device_id": "system", 00:16:50.831 "dma_device_type": 1 00:16:50.831 }, 00:16:50.831 { 00:16:50.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.831 "dma_device_type": 2 00:16:50.831 } 00:16:50.831 ], 00:16:50.831 "driver_specific": {} 00:16:50.831 } 00:16:50.831 ] 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.831 
16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.831 "name": "Existed_Raid", 00:16:50.831 "uuid": "6ac65b9a-3ae1-49f2-acfb-c18e65e1fd65", 00:16:50.831 "strip_size_kb": 0, 00:16:50.831 "state": "online", 00:16:50.831 "raid_level": "raid1", 00:16:50.831 "superblock": true, 00:16:50.831 "num_base_bdevs": 2, 00:16:50.831 "num_base_bdevs_discovered": 2, 00:16:50.831 
"num_base_bdevs_operational": 2, 00:16:50.831 "base_bdevs_list": [ 00:16:50.831 { 00:16:50.831 "name": "BaseBdev1", 00:16:50.831 "uuid": "21fcb3b6-d2c9-4b59-8c19-ad75c0cd9819", 00:16:50.831 "is_configured": true, 00:16:50.831 "data_offset": 256, 00:16:50.831 "data_size": 7936 00:16:50.831 }, 00:16:50.831 { 00:16:50.831 "name": "BaseBdev2", 00:16:50.831 "uuid": "02c2400f-a388-4ad8-9d12-8800c6f48384", 00:16:50.831 "is_configured": true, 00:16:50.831 "data_offset": 256, 00:16:50.831 "data_size": 7936 00:16:50.831 } 00:16:50.831 ] 00:16:50.831 }' 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.831 16:42:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.401 16:42:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.401 [2024-12-07 16:42:50.154431] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:51.401 "name": "Existed_Raid", 00:16:51.401 "aliases": [ 00:16:51.401 "6ac65b9a-3ae1-49f2-acfb-c18e65e1fd65" 00:16:51.401 ], 00:16:51.401 "product_name": "Raid Volume", 00:16:51.401 "block_size": 4128, 00:16:51.401 "num_blocks": 7936, 00:16:51.401 "uuid": "6ac65b9a-3ae1-49f2-acfb-c18e65e1fd65", 00:16:51.401 "md_size": 32, 00:16:51.401 "md_interleave": true, 00:16:51.401 "dif_type": 0, 00:16:51.401 "assigned_rate_limits": { 00:16:51.401 "rw_ios_per_sec": 0, 00:16:51.401 "rw_mbytes_per_sec": 0, 00:16:51.401 "r_mbytes_per_sec": 0, 00:16:51.401 "w_mbytes_per_sec": 0 00:16:51.401 }, 00:16:51.401 "claimed": false, 00:16:51.401 "zoned": false, 00:16:51.401 "supported_io_types": { 00:16:51.401 "read": true, 00:16:51.401 "write": true, 00:16:51.401 "unmap": false, 00:16:51.401 "flush": false, 00:16:51.401 "reset": true, 00:16:51.401 "nvme_admin": false, 00:16:51.401 "nvme_io": false, 00:16:51.401 "nvme_io_md": false, 00:16:51.401 "write_zeroes": true, 00:16:51.401 "zcopy": false, 00:16:51.401 "get_zone_info": false, 00:16:51.401 "zone_management": false, 00:16:51.401 "zone_append": false, 00:16:51.401 "compare": false, 00:16:51.401 "compare_and_write": false, 00:16:51.401 "abort": false, 00:16:51.401 "seek_hole": false, 00:16:51.401 "seek_data": false, 00:16:51.401 "copy": false, 00:16:51.401 "nvme_iov_md": false 00:16:51.401 }, 00:16:51.401 "memory_domains": [ 00:16:51.401 { 00:16:51.401 "dma_device_id": "system", 00:16:51.401 "dma_device_type": 1 00:16:51.401 }, 00:16:51.401 { 00:16:51.401 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:51.401 "dma_device_type": 2 00:16:51.401 }, 00:16:51.401 { 00:16:51.401 "dma_device_id": "system", 00:16:51.401 "dma_device_type": 1 00:16:51.401 }, 00:16:51.401 { 00:16:51.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.401 "dma_device_type": 2 00:16:51.401 } 00:16:51.401 ], 00:16:51.401 "driver_specific": { 00:16:51.401 "raid": { 00:16:51.401 "uuid": "6ac65b9a-3ae1-49f2-acfb-c18e65e1fd65", 00:16:51.401 "strip_size_kb": 0, 00:16:51.401 "state": "online", 00:16:51.401 "raid_level": "raid1", 00:16:51.401 "superblock": true, 00:16:51.401 "num_base_bdevs": 2, 00:16:51.401 "num_base_bdevs_discovered": 2, 00:16:51.401 "num_base_bdevs_operational": 2, 00:16:51.401 "base_bdevs_list": [ 00:16:51.401 { 00:16:51.401 "name": "BaseBdev1", 00:16:51.401 "uuid": "21fcb3b6-d2c9-4b59-8c19-ad75c0cd9819", 00:16:51.401 "is_configured": true, 00:16:51.401 "data_offset": 256, 00:16:51.401 "data_size": 7936 00:16:51.401 }, 00:16:51.401 { 00:16:51.401 "name": "BaseBdev2", 00:16:51.401 "uuid": "02c2400f-a388-4ad8-9d12-8800c6f48384", 00:16:51.401 "is_configured": true, 00:16:51.401 "data_offset": 256, 00:16:51.401 "data_size": 7936 00:16:51.401 } 00:16:51.401 ] 00:16:51.401 } 00:16:51.401 } 00:16:51.401 }' 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:51.401 BaseBdev2' 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.401 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:51.661 
16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.661 [2024-12-07 16:42:50.365798] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.661 16:42:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.661 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.661 "name": "Existed_Raid", 00:16:51.661 "uuid": "6ac65b9a-3ae1-49f2-acfb-c18e65e1fd65", 00:16:51.661 "strip_size_kb": 0, 00:16:51.661 "state": "online", 00:16:51.661 "raid_level": "raid1", 00:16:51.661 "superblock": true, 00:16:51.661 "num_base_bdevs": 2, 00:16:51.661 "num_base_bdevs_discovered": 1, 00:16:51.661 "num_base_bdevs_operational": 1, 00:16:51.661 "base_bdevs_list": [ 00:16:51.661 { 00:16:51.661 "name": null, 00:16:51.661 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:51.661 "is_configured": false, 00:16:51.661 "data_offset": 0, 00:16:51.661 "data_size": 7936 00:16:51.661 }, 00:16:51.661 { 00:16:51.661 "name": "BaseBdev2", 00:16:51.661 "uuid": "02c2400f-a388-4ad8-9d12-8800c6f48384", 00:16:51.661 "is_configured": true, 00:16:51.661 "data_offset": 256, 00:16:51.661 "data_size": 7936 00:16:51.661 } 00:16:51.661 ] 00:16:51.661 }' 00:16:51.662 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.662 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.922 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:51.922 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:51.922 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.922 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:51.922 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.922 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:52.183 16:42:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.183 [2024-12-07 16:42:50.866760] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:52.183 [2024-12-07 16:42:50.866985] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.183 [2024-12-07 16:42:50.889201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.183 [2024-12-07 16:42:50.889365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.183 [2024-12-07 16:42:50.889429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 99128 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99128 ']' 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99128 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99128 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99128' 00:16:52.183 killing process with pid 99128 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99128 00:16:52.183 [2024-12-07 16:42:50.980615] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.183 16:42:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99128 00:16:52.183 [2024-12-07 16:42:50.982363] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.754 
16:42:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:52.754 00:16:52.754 real 0m4.055s 00:16:52.754 user 0m6.139s 00:16:52.754 sys 0m0.954s 00:16:52.754 16:42:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.754 16:42:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.754 ************************************ 00:16:52.754 END TEST raid_state_function_test_sb_md_interleaved 00:16:52.754 ************************************ 00:16:52.754 16:42:51 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:52.754 16:42:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:52.754 16:42:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.754 16:42:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.754 ************************************ 00:16:52.754 START TEST raid_superblock_test_md_interleaved 00:16:52.754 ************************************ 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99370 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99370 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99370 ']' 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.754 16:42:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.754 [2024-12-07 16:42:51.525415] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:52.754 [2024-12-07 16:42:51.525583] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99370 ] 00:16:53.062 [2024-12-07 16:42:51.675878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.062 [2024-12-07 16:42:51.755971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.062 [2024-12-07 16:42:51.832881] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.062 [2024-12-07 16:42:51.832923] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.655 malloc1 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.655 [2024-12-07 16:42:52.409016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:53.655 [2024-12-07 16:42:52.409212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.655 [2024-12-07 16:42:52.409269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:53.655 [2024-12-07 16:42:52.409303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.655 
[2024-12-07 16:42:52.411544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.655 [2024-12-07 16:42:52.411630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:53.655 pt1 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:53.655 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.656 malloc2 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.656 [2024-12-07 16:42:52.458042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:53.656 [2024-12-07 16:42:52.458232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.656 [2024-12-07 16:42:52.458270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:53.656 [2024-12-07 16:42:52.458301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.656 [2024-12-07 16:42:52.460508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.656 [2024-12-07 16:42:52.460583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:53.656 pt2 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.656 [2024-12-07 16:42:52.470072] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:53.656 [2024-12-07 16:42:52.472260] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:53.656 [2024-12-07 16:42:52.472477] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:53.656 [2024-12-07 16:42:52.472529] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:53.656 [2024-12-07 16:42:52.472648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:53.656 [2024-12-07 16:42:52.472759] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:53.656 [2024-12-07 16:42:52.472803] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:53.656 [2024-12-07 16:42:52.472921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.656 
16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.656 "name": "raid_bdev1", 00:16:53.656 "uuid": "615eb021-672c-4332-bc06-64de685646cc", 00:16:53.656 "strip_size_kb": 0, 00:16:53.656 "state": "online", 00:16:53.656 "raid_level": "raid1", 00:16:53.656 "superblock": true, 00:16:53.656 "num_base_bdevs": 2, 00:16:53.656 "num_base_bdevs_discovered": 2, 00:16:53.656 "num_base_bdevs_operational": 2, 00:16:53.656 "base_bdevs_list": [ 00:16:53.656 { 00:16:53.656 "name": "pt1", 00:16:53.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:53.656 "is_configured": true, 00:16:53.656 "data_offset": 256, 00:16:53.656 "data_size": 7936 00:16:53.656 }, 00:16:53.656 { 00:16:53.656 "name": "pt2", 00:16:53.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:53.656 "is_configured": true, 00:16:53.656 "data_offset": 256, 00:16:53.656 "data_size": 7936 00:16:53.656 } 00:16:53.656 ] 00:16:53.656 }' 00:16:53.656 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.656 16:42:52 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:54.230 [2024-12-07 16:42:52.865834] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:54.230 "name": "raid_bdev1", 00:16:54.230 "aliases": [ 00:16:54.230 "615eb021-672c-4332-bc06-64de685646cc" 00:16:54.230 ], 00:16:54.230 "product_name": "Raid Volume", 00:16:54.230 "block_size": 4128, 00:16:54.230 "num_blocks": 7936, 00:16:54.230 "uuid": "615eb021-672c-4332-bc06-64de685646cc", 00:16:54.230 "md_size": 32, 
00:16:54.230 "md_interleave": true, 00:16:54.230 "dif_type": 0, 00:16:54.230 "assigned_rate_limits": { 00:16:54.230 "rw_ios_per_sec": 0, 00:16:54.230 "rw_mbytes_per_sec": 0, 00:16:54.230 "r_mbytes_per_sec": 0, 00:16:54.230 "w_mbytes_per_sec": 0 00:16:54.230 }, 00:16:54.230 "claimed": false, 00:16:54.230 "zoned": false, 00:16:54.230 "supported_io_types": { 00:16:54.230 "read": true, 00:16:54.230 "write": true, 00:16:54.230 "unmap": false, 00:16:54.230 "flush": false, 00:16:54.230 "reset": true, 00:16:54.230 "nvme_admin": false, 00:16:54.230 "nvme_io": false, 00:16:54.230 "nvme_io_md": false, 00:16:54.230 "write_zeroes": true, 00:16:54.230 "zcopy": false, 00:16:54.230 "get_zone_info": false, 00:16:54.230 "zone_management": false, 00:16:54.230 "zone_append": false, 00:16:54.230 "compare": false, 00:16:54.230 "compare_and_write": false, 00:16:54.230 "abort": false, 00:16:54.230 "seek_hole": false, 00:16:54.230 "seek_data": false, 00:16:54.230 "copy": false, 00:16:54.230 "nvme_iov_md": false 00:16:54.230 }, 00:16:54.230 "memory_domains": [ 00:16:54.230 { 00:16:54.230 "dma_device_id": "system", 00:16:54.230 "dma_device_type": 1 00:16:54.230 }, 00:16:54.230 { 00:16:54.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.230 "dma_device_type": 2 00:16:54.230 }, 00:16:54.230 { 00:16:54.230 "dma_device_id": "system", 00:16:54.230 "dma_device_type": 1 00:16:54.230 }, 00:16:54.230 { 00:16:54.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.230 "dma_device_type": 2 00:16:54.230 } 00:16:54.230 ], 00:16:54.230 "driver_specific": { 00:16:54.230 "raid": { 00:16:54.230 "uuid": "615eb021-672c-4332-bc06-64de685646cc", 00:16:54.230 "strip_size_kb": 0, 00:16:54.230 "state": "online", 00:16:54.230 "raid_level": "raid1", 00:16:54.230 "superblock": true, 00:16:54.230 "num_base_bdevs": 2, 00:16:54.230 "num_base_bdevs_discovered": 2, 00:16:54.230 "num_base_bdevs_operational": 2, 00:16:54.230 "base_bdevs_list": [ 00:16:54.230 { 00:16:54.230 "name": "pt1", 00:16:54.230 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:54.230 "is_configured": true, 00:16:54.230 "data_offset": 256, 00:16:54.230 "data_size": 7936 00:16:54.230 }, 00:16:54.230 { 00:16:54.230 "name": "pt2", 00:16:54.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.230 "is_configured": true, 00:16:54.230 "data_offset": 256, 00:16:54.230 "data_size": 7936 00:16:54.230 } 00:16:54.230 ] 00:16:54.230 } 00:16:54.230 } 00:16:54.230 }' 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:54.230 pt2' 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.230 16:42:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:54.230 16:42:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.230 [2024-12-07 16:42:53.097328] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.230 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=615eb021-672c-4332-bc06-64de685646cc 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 615eb021-672c-4332-bc06-64de685646cc ']' 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.492 [2024-12-07 16:42:53.136990] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.492 [2024-12-07 16:42:53.137119] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.492 [2024-12-07 16:42:53.137263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.492 [2024-12-07 16:42:53.137385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.492 [2024-12-07 16:42:53.137437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.492 16:42:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:54.492 16:42:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:54.492 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.493 [2024-12-07 16:42:53.284774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:54.493 [2024-12-07 16:42:53.287046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:54.493 [2024-12-07 16:42:53.287178] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:16:54.493 [2024-12-07 16:42:53.287286] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:54.493 [2024-12-07 16:42:53.287332] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.493 [2024-12-07 16:42:53.287396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:54.493 request: 00:16:54.493 { 00:16:54.493 "name": "raid_bdev1", 00:16:54.493 "raid_level": "raid1", 00:16:54.493 "base_bdevs": [ 00:16:54.493 "malloc1", 00:16:54.493 "malloc2" 00:16:54.493 ], 00:16:54.493 "superblock": false, 00:16:54.493 "method": "bdev_raid_create", 00:16:54.493 "req_id": 1 00:16:54.493 } 00:16:54.493 Got JSON-RPC error response 00:16:54.493 response: 00:16:54.493 { 00:16:54.493 "code": -17, 00:16:54.493 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:54.493 } 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.493 16:42:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.493 [2024-12-07 16:42:53.356564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:54.493 [2024-12-07 16:42:53.356744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.493 [2024-12-07 16:42:53.356788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:54.493 [2024-12-07 16:42:53.356820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.493 [2024-12-07 16:42:53.359266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.493 [2024-12-07 16:42:53.359356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:54.493 [2024-12-07 16:42:53.359484] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:54.493 [2024-12-07 16:42:53.359560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:54.493 pt1 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.493 16:42:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.493 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.751 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.751 
"name": "raid_bdev1", 00:16:54.751 "uuid": "615eb021-672c-4332-bc06-64de685646cc", 00:16:54.751 "strip_size_kb": 0, 00:16:54.751 "state": "configuring", 00:16:54.751 "raid_level": "raid1", 00:16:54.751 "superblock": true, 00:16:54.751 "num_base_bdevs": 2, 00:16:54.751 "num_base_bdevs_discovered": 1, 00:16:54.751 "num_base_bdevs_operational": 2, 00:16:54.751 "base_bdevs_list": [ 00:16:54.751 { 00:16:54.751 "name": "pt1", 00:16:54.751 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:54.751 "is_configured": true, 00:16:54.751 "data_offset": 256, 00:16:54.751 "data_size": 7936 00:16:54.751 }, 00:16:54.751 { 00:16:54.751 "name": null, 00:16:54.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.751 "is_configured": false, 00:16:54.751 "data_offset": 256, 00:16:54.751 "data_size": 7936 00:16:54.751 } 00:16:54.751 ] 00:16:54.751 }' 00:16:54.751 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.751 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.010 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:55.010 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:55.010 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:55.010 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:55.010 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.010 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.010 [2024-12-07 16:42:53.819811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:55.010 [2024-12-07 16:42:53.819996] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.010 [2024-12-07 16:42:53.820044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:55.010 [2024-12-07 16:42:53.820073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.010 [2024-12-07 16:42:53.820336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.010 [2024-12-07 16:42:53.820389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:55.010 [2024-12-07 16:42:53.820489] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:55.010 [2024-12-07 16:42:53.820543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:55.010 [2024-12-07 16:42:53.820684] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:55.010 [2024-12-07 16:42:53.820716] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:55.010 [2024-12-07 16:42:53.820839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:55.010 [2024-12-07 16:42:53.820935] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:55.010 [2024-12-07 16:42:53.820973] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:55.011 [2024-12-07 16:42:53.821078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.011 pt2 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:55.011 16:42:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.011 "name": 
"raid_bdev1", 00:16:55.011 "uuid": "615eb021-672c-4332-bc06-64de685646cc", 00:16:55.011 "strip_size_kb": 0, 00:16:55.011 "state": "online", 00:16:55.011 "raid_level": "raid1", 00:16:55.011 "superblock": true, 00:16:55.011 "num_base_bdevs": 2, 00:16:55.011 "num_base_bdevs_discovered": 2, 00:16:55.011 "num_base_bdevs_operational": 2, 00:16:55.011 "base_bdevs_list": [ 00:16:55.011 { 00:16:55.011 "name": "pt1", 00:16:55.011 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:55.011 "is_configured": true, 00:16:55.011 "data_offset": 256, 00:16:55.011 "data_size": 7936 00:16:55.011 }, 00:16:55.011 { 00:16:55.011 "name": "pt2", 00:16:55.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.011 "is_configured": true, 00:16:55.011 "data_offset": 256, 00:16:55.011 "data_size": 7936 00:16:55.011 } 00:16:55.011 ] 00:16:55.011 }' 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.011 16:42:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.580 [2024-12-07 16:42:54.287482] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:55.580 "name": "raid_bdev1", 00:16:55.580 "aliases": [ 00:16:55.580 "615eb021-672c-4332-bc06-64de685646cc" 00:16:55.580 ], 00:16:55.580 "product_name": "Raid Volume", 00:16:55.580 "block_size": 4128, 00:16:55.580 "num_blocks": 7936, 00:16:55.580 "uuid": "615eb021-672c-4332-bc06-64de685646cc", 00:16:55.580 "md_size": 32, 00:16:55.580 "md_interleave": true, 00:16:55.580 "dif_type": 0, 00:16:55.580 "assigned_rate_limits": { 00:16:55.580 "rw_ios_per_sec": 0, 00:16:55.580 "rw_mbytes_per_sec": 0, 00:16:55.580 "r_mbytes_per_sec": 0, 00:16:55.580 "w_mbytes_per_sec": 0 00:16:55.580 }, 00:16:55.580 "claimed": false, 00:16:55.580 "zoned": false, 00:16:55.580 "supported_io_types": { 00:16:55.580 "read": true, 00:16:55.580 "write": true, 00:16:55.580 "unmap": false, 00:16:55.580 "flush": false, 00:16:55.580 "reset": true, 00:16:55.580 "nvme_admin": false, 00:16:55.580 "nvme_io": false, 00:16:55.580 "nvme_io_md": false, 00:16:55.580 "write_zeroes": true, 00:16:55.580 "zcopy": false, 00:16:55.580 "get_zone_info": false, 00:16:55.580 "zone_management": false, 00:16:55.580 "zone_append": false, 00:16:55.580 "compare": false, 00:16:55.580 "compare_and_write": false, 00:16:55.580 "abort": false, 00:16:55.580 "seek_hole": false, 00:16:55.580 "seek_data": false, 00:16:55.580 "copy": false, 00:16:55.580 "nvme_iov_md": false 00:16:55.580 }, 
00:16:55.580 "memory_domains": [ 00:16:55.580 { 00:16:55.580 "dma_device_id": "system", 00:16:55.580 "dma_device_type": 1 00:16:55.580 }, 00:16:55.580 { 00:16:55.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.580 "dma_device_type": 2 00:16:55.580 }, 00:16:55.580 { 00:16:55.580 "dma_device_id": "system", 00:16:55.580 "dma_device_type": 1 00:16:55.580 }, 00:16:55.580 { 00:16:55.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.580 "dma_device_type": 2 00:16:55.580 } 00:16:55.580 ], 00:16:55.580 "driver_specific": { 00:16:55.580 "raid": { 00:16:55.580 "uuid": "615eb021-672c-4332-bc06-64de685646cc", 00:16:55.580 "strip_size_kb": 0, 00:16:55.580 "state": "online", 00:16:55.580 "raid_level": "raid1", 00:16:55.580 "superblock": true, 00:16:55.580 "num_base_bdevs": 2, 00:16:55.580 "num_base_bdevs_discovered": 2, 00:16:55.580 "num_base_bdevs_operational": 2, 00:16:55.580 "base_bdevs_list": [ 00:16:55.580 { 00:16:55.580 "name": "pt1", 00:16:55.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:55.580 "is_configured": true, 00:16:55.580 "data_offset": 256, 00:16:55.580 "data_size": 7936 00:16:55.580 }, 00:16:55.580 { 00:16:55.580 "name": "pt2", 00:16:55.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.580 "is_configured": true, 00:16:55.580 "data_offset": 256, 00:16:55.580 "data_size": 7936 00:16:55.580 } 00:16:55.580 ] 00:16:55.580 } 00:16:55.580 } 00:16:55.580 }' 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:55.580 pt2' 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.580 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.581 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.581 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:55.581 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:55.581 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.581 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.581 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:55.581 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.581 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.581 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.839 [2024-12-07 16:42:54.495084] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 615eb021-672c-4332-bc06-64de685646cc '!=' 615eb021-672c-4332-bc06-64de685646cc ']' 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.839 [2024-12-07 16:42:54.538778] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.839 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.840 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.840 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.840 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:16:55.840 "name": "raid_bdev1", 00:16:55.840 "uuid": "615eb021-672c-4332-bc06-64de685646cc", 00:16:55.840 "strip_size_kb": 0, 00:16:55.840 "state": "online", 00:16:55.840 "raid_level": "raid1", 00:16:55.840 "superblock": true, 00:16:55.840 "num_base_bdevs": 2, 00:16:55.840 "num_base_bdevs_discovered": 1, 00:16:55.840 "num_base_bdevs_operational": 1, 00:16:55.840 "base_bdevs_list": [ 00:16:55.840 { 00:16:55.840 "name": null, 00:16:55.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.840 "is_configured": false, 00:16:55.840 "data_offset": 0, 00:16:55.840 "data_size": 7936 00:16:55.840 }, 00:16:55.840 { 00:16:55.840 "name": "pt2", 00:16:55.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.840 "is_configured": true, 00:16:55.840 "data_offset": 256, 00:16:55.840 "data_size": 7936 00:16:55.840 } 00:16:55.840 ] 00:16:55.840 }' 00:16:55.840 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.840 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.098 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:56.098 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.098 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.098 [2024-12-07 16:42:54.985903] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:56.098 [2024-12-07 16:42:54.986041] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.098 [2024-12-07 16:42:54.986176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.098 [2024-12-07 16:42:54.986255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.098 [2024-12-07 
16:42:54.986305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:56.098 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.365 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:56.365 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.365 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.365 16:42:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.365 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.365 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:56.365 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:56.365 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:56.365 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:56.365 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:56.365 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.366 [2024-12-07 16:42:55.065743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.366 [2024-12-07 16:42:55.065919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.366 [2024-12-07 16:42:55.065961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:56.366 [2024-12-07 16:42:55.065989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.366 [2024-12-07 16:42:55.068408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.366 [2024-12-07 16:42:55.068488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:56.366 [2024-12-07 16:42:55.068591] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:56.366 [2024-12-07 16:42:55.068658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:56.366 [2024-12-07 16:42:55.068740] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:56.366 [2024-12-07 16:42:55.068750] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:16:56.366 [2024-12-07 16:42:55.068864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:56.366 [2024-12-07 16:42:55.068928] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:56.366 [2024-12-07 16:42:55.068939] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:56.366 [2024-12-07 16:42:55.069023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.366 pt2 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.366 "name": "raid_bdev1", 00:16:56.366 "uuid": "615eb021-672c-4332-bc06-64de685646cc", 00:16:56.366 "strip_size_kb": 0, 00:16:56.366 "state": "online", 00:16:56.366 "raid_level": "raid1", 00:16:56.366 "superblock": true, 00:16:56.366 "num_base_bdevs": 2, 00:16:56.366 "num_base_bdevs_discovered": 1, 00:16:56.366 "num_base_bdevs_operational": 1, 00:16:56.366 "base_bdevs_list": [ 00:16:56.366 { 00:16:56.366 "name": null, 00:16:56.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.366 "is_configured": false, 00:16:56.366 "data_offset": 256, 00:16:56.366 "data_size": 7936 00:16:56.366 }, 00:16:56.366 { 00:16:56.366 "name": "pt2", 00:16:56.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.366 "is_configured": true, 00:16:56.366 "data_offset": 256, 00:16:56.366 "data_size": 7936 00:16:56.366 } 00:16:56.366 ] 00:16:56.366 }' 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.366 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.627 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:56.627 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:56.627 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.627 [2024-12-07 16:42:55.489048] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:56.627 [2024-12-07 16:42:55.489177] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.627 [2024-12-07 16:42:55.489303] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.627 [2024-12-07 16:42:55.489388] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.627 [2024-12-07 16:42:55.489435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:56.627 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.627 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.627 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:56.627 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.627 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.627 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.886 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:56.886 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:56.886 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:56.886 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:56.886 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.886 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.886 [2024-12-07 16:42:55.540967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:56.886 [2024-12-07 16:42:55.541161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.886 [2024-12-07 16:42:55.541205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:56.886 [2024-12-07 16:42:55.541249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.886 [2024-12-07 16:42:55.543719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.886 [2024-12-07 16:42:55.543812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:56.886 [2024-12-07 16:42:55.543932] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:56.886 [2024-12-07 16:42:55.544007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:56.886 [2024-12-07 16:42:55.544136] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:56.886 [2024-12-07 16:42:55.544191] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:56.886 [2024-12-07 16:42:55.544253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:56.886 [2024-12-07 16:42:55.544336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:56.886 [2024-12-07 16:42:55.544466] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:56.886 [2024-12-07 16:42:55.544510] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:56.886 [2024-12-07 16:42:55.544620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:56.886 [2024-12-07 16:42:55.544731] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:56.886 [2024-12-07 16:42:55.544765] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:56.886 [2024-12-07 16:42:55.544916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.886 pt1 00:16:56.886 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.886 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:56.886 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:56.886 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.886 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.886 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.886 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.886 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:56.887 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.887 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.887 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:56.887 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.887 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.887 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.887 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.887 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.887 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.887 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.887 "name": "raid_bdev1", 00:16:56.887 "uuid": "615eb021-672c-4332-bc06-64de685646cc", 00:16:56.887 "strip_size_kb": 0, 00:16:56.887 "state": "online", 00:16:56.887 "raid_level": "raid1", 00:16:56.887 "superblock": true, 00:16:56.887 "num_base_bdevs": 2, 00:16:56.887 "num_base_bdevs_discovered": 1, 00:16:56.887 "num_base_bdevs_operational": 1, 00:16:56.887 "base_bdevs_list": [ 00:16:56.887 { 00:16:56.887 "name": null, 00:16:56.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.887 "is_configured": false, 00:16:56.887 "data_offset": 256, 00:16:56.887 "data_size": 7936 00:16:56.887 }, 00:16:56.887 { 00:16:56.887 "name": "pt2", 00:16:56.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.887 "is_configured": true, 00:16:56.887 "data_offset": 256, 00:16:56.887 "data_size": 7936 00:16:56.887 } 00:16:56.887 ] 00:16:56.887 }' 00:16:56.887 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.887 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.145 16:42:55 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:57.146 16:42:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:57.146 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.146 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.146 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.146 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:57.146 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:57.146 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.146 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.146 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:57.146 [2024-12-07 16:42:56.036462] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.404 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.404 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 615eb021-672c-4332-bc06-64de685646cc '!=' 615eb021-672c-4332-bc06-64de685646cc ']' 00:16:57.404 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99370 00:16:57.404 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99370 ']' 00:16:57.404 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99370 00:16:57.404 16:42:56 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:57.404 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.404 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99370 00:16:57.404 killing process with pid 99370 00:16:57.404 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:57.404 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:57.404 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99370' 00:16:57.404 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 99370 00:16:57.404 [2024-12-07 16:42:56.124326] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:57.404 [2024-12-07 16:42:56.124458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.404 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 99370 00:16:57.404 [2024-12-07 16:42:56.124525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.404 [2024-12-07 16:42:56.124537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:57.404 [2024-12-07 16:42:56.168909] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:57.661 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:16:57.661 00:16:57.661 real 0m5.117s 00:16:57.661 user 0m8.068s 00:16:57.661 sys 0m1.237s 00:16:57.661 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:16:57.661 16:42:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.661 ************************************ 00:16:57.661 END TEST raid_superblock_test_md_interleaved 00:16:57.662 ************************************ 00:16:57.921 16:42:56 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:57.921 16:42:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:57.921 16:42:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:57.921 16:42:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:57.921 ************************************ 00:16:57.921 START TEST raid_rebuild_test_sb_md_interleaved 00:16:57.921 ************************************ 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=99683 00:16:57.921 16:42:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99683 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99683 ']' 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.921 16:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.921 [2024-12-07 16:42:56.728026] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:57.921 [2024-12-07 16:42:56.728290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99683 ] 00:16:57.921 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:57.921 Zero copy mechanism will not be used. 
00:16:58.180 [2024-12-07 16:42:56.876277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.180 [2024-12-07 16:42:56.957644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.180 [2024-12-07 16:42:57.035677] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.180 [2024-12-07 16:42:57.035840] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.748 BaseBdev1_malloc 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.748 [2024-12-07 16:42:57.612873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:58.748 [2024-12-07 16:42:57.613070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.748 
[2024-12-07 16:42:57.613124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:58.748 [2024-12-07 16:42:57.613154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.748 [2024-12-07 16:42:57.615407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.748 [2024-12-07 16:42:57.615486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:58.748 BaseBdev1 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.748 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.008 BaseBdev2_malloc 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.008 [2024-12-07 16:42:57.658268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:59.008 [2024-12-07 16:42:57.658462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.008 [2024-12-07 16:42:57.658514] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:59.008 [2024-12-07 16:42:57.658550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.008 [2024-12-07 16:42:57.661088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.008 [2024-12-07 16:42:57.661171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:59.008 BaseBdev2 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.008 spare_malloc 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.008 spare_delay 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.008 16:42:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.008 [2024-12-07 16:42:57.706214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:59.008 [2024-12-07 16:42:57.706426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.008 [2024-12-07 16:42:57.706481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:59.008 [2024-12-07 16:42:57.706521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.008 [2024-12-07 16:42:57.708856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.008 [2024-12-07 16:42:57.708935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:59.008 spare 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.008 [2024-12-07 16:42:57.718282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:59.008 [2024-12-07 16:42:57.720725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.008 [2024-12-07 16:42:57.720998] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:59.008 [2024-12-07 16:42:57.721015] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:59.008 [2024-12-07 16:42:57.721144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:16:59.008 [2024-12-07 16:42:57.721218] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:59.008 [2024-12-07 16:42:57.721229] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:59.008 [2024-12-07 16:42:57.721321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.008 "name": "raid_bdev1", 00:16:59.008 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:16:59.008 "strip_size_kb": 0, 00:16:59.008 "state": "online", 00:16:59.008 "raid_level": "raid1", 00:16:59.008 "superblock": true, 00:16:59.008 "num_base_bdevs": 2, 00:16:59.008 "num_base_bdevs_discovered": 2, 00:16:59.008 "num_base_bdevs_operational": 2, 00:16:59.008 "base_bdevs_list": [ 00:16:59.008 { 00:16:59.008 "name": "BaseBdev1", 00:16:59.008 "uuid": "d284778e-3f53-5622-aeae-43ebb0327a49", 00:16:59.008 "is_configured": true, 00:16:59.008 "data_offset": 256, 00:16:59.008 "data_size": 7936 00:16:59.008 }, 00:16:59.008 { 00:16:59.008 "name": "BaseBdev2", 00:16:59.008 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:16:59.008 "is_configured": true, 00:16:59.008 "data_offset": 256, 00:16:59.008 "data_size": 7936 00:16:59.008 } 00:16:59.008 ] 00:16:59.008 }' 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.008 16:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.268 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:59.268 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.268 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:59.268 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.268 [2024-12-07 16:42:58.129887] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.268 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.527 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:59.527 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:59.527 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.527 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.527 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.528 [2024-12-07 16:42:58.221365] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.528 "name": "raid_bdev1", 00:16:59.528 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:16:59.528 "strip_size_kb": 0, 00:16:59.528 "state": "online", 00:16:59.528 "raid_level": "raid1", 00:16:59.528 "superblock": true, 00:16:59.528 "num_base_bdevs": 2, 00:16:59.528 "num_base_bdevs_discovered": 1, 00:16:59.528 "num_base_bdevs_operational": 1, 00:16:59.528 "base_bdevs_list": [ 00:16:59.528 { 00:16:59.528 "name": null, 00:16:59.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.528 "is_configured": false, 00:16:59.528 "data_offset": 0, 00:16:59.528 "data_size": 7936 00:16:59.528 }, 00:16:59.528 { 00:16:59.528 "name": "BaseBdev2", 00:16:59.528 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:16:59.528 "is_configured": true, 00:16:59.528 "data_offset": 256, 00:16:59.528 "data_size": 7936 00:16:59.528 } 00:16:59.528 ] 00:16:59.528 }' 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.528 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.787 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:59.787 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.787 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.787 [2024-12-07 16:42:58.628677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:59.787 [2024-12-07 16:42:58.634063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:59.787 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.787 16:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:59.787 
[2024-12-07 16:42:58.636422] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:01.167 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.167 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.167 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.167 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.167 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.167 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.167 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.167 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.167 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.167 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.167 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.167 "name": "raid_bdev1", 00:17:01.167 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:01.167 "strip_size_kb": 0, 00:17:01.167 "state": "online", 00:17:01.167 "raid_level": "raid1", 00:17:01.167 "superblock": true, 00:17:01.167 "num_base_bdevs": 2, 00:17:01.167 "num_base_bdevs_discovered": 2, 00:17:01.167 "num_base_bdevs_operational": 2, 00:17:01.167 "process": { 00:17:01.167 "type": "rebuild", 00:17:01.167 "target": "spare", 00:17:01.167 "progress": { 00:17:01.168 
"blocks": 2560, 00:17:01.168 "percent": 32 00:17:01.168 } 00:17:01.168 }, 00:17:01.168 "base_bdevs_list": [ 00:17:01.168 { 00:17:01.168 "name": "spare", 00:17:01.168 "uuid": "d8a935e8-f245-531b-af14-d1f209f9c267", 00:17:01.168 "is_configured": true, 00:17:01.168 "data_offset": 256, 00:17:01.168 "data_size": 7936 00:17:01.168 }, 00:17:01.168 { 00:17:01.168 "name": "BaseBdev2", 00:17:01.168 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:01.168 "is_configured": true, 00:17:01.168 "data_offset": 256, 00:17:01.168 "data_size": 7936 00:17:01.168 } 00:17:01.168 ] 00:17:01.168 }' 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.168 [2024-12-07 16:42:59.788933] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.168 [2024-12-07 16:42:59.846818] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:01.168 [2024-12-07 16:42:59.847046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.168 [2024-12-07 16:42:59.847088] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.168 [2024-12-07 16:42:59.847110] 
bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.168 "name": "raid_bdev1", 00:17:01.168 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:01.168 "strip_size_kb": 0, 00:17:01.168 "state": "online", 00:17:01.168 "raid_level": "raid1", 00:17:01.168 "superblock": true, 00:17:01.168 "num_base_bdevs": 2, 00:17:01.168 "num_base_bdevs_discovered": 1, 00:17:01.168 "num_base_bdevs_operational": 1, 00:17:01.168 "base_bdevs_list": [ 00:17:01.168 { 00:17:01.168 "name": null, 00:17:01.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.168 "is_configured": false, 00:17:01.168 "data_offset": 0, 00:17:01.168 "data_size": 7936 00:17:01.168 }, 00:17:01.168 { 00:17:01.168 "name": "BaseBdev2", 00:17:01.168 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:01.168 "is_configured": true, 00:17:01.168 "data_offset": 256, 00:17:01.168 "data_size": 7936 00:17:01.168 } 00:17:01.168 ] 00:17:01.168 }' 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.168 16:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.427 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.427 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.427 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.427 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.427 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.428 16:43:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.428 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.428 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.428 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.428 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.686 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.686 "name": "raid_bdev1", 00:17:01.686 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:01.686 "strip_size_kb": 0, 00:17:01.686 "state": "online", 00:17:01.686 "raid_level": "raid1", 00:17:01.686 "superblock": true, 00:17:01.686 "num_base_bdevs": 2, 00:17:01.686 "num_base_bdevs_discovered": 1, 00:17:01.686 "num_base_bdevs_operational": 1, 00:17:01.686 "base_bdevs_list": [ 00:17:01.686 { 00:17:01.686 "name": null, 00:17:01.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.686 "is_configured": false, 00:17:01.686 "data_offset": 0, 00:17:01.686 "data_size": 7936 00:17:01.686 }, 00:17:01.686 { 00:17:01.686 "name": "BaseBdev2", 00:17:01.686 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:01.686 "is_configured": true, 00:17:01.686 "data_offset": 256, 00:17:01.686 "data_size": 7936 00:17:01.686 } 00:17:01.686 ] 00:17:01.686 }' 00:17:01.686 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.686 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.686 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.686 16:43:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.686 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:01.686 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.686 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.686 [2024-12-07 16:43:00.449164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:01.686 [2024-12-07 16:43:00.454574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:01.686 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.687 16:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:01.687 [2024-12-07 16:43:00.456896] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:02.622 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.622 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.622 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.622 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.622 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.622 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.622 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:02.622 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.622 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.622 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.622 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.622 "name": "raid_bdev1", 00:17:02.622 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:02.622 "strip_size_kb": 0, 00:17:02.622 "state": "online", 00:17:02.622 "raid_level": "raid1", 00:17:02.622 "superblock": true, 00:17:02.622 "num_base_bdevs": 2, 00:17:02.622 "num_base_bdevs_discovered": 2, 00:17:02.622 "num_base_bdevs_operational": 2, 00:17:02.622 "process": { 00:17:02.622 "type": "rebuild", 00:17:02.622 "target": "spare", 00:17:02.622 "progress": { 00:17:02.622 "blocks": 2560, 00:17:02.622 "percent": 32 00:17:02.622 } 00:17:02.622 }, 00:17:02.622 "base_bdevs_list": [ 00:17:02.622 { 00:17:02.622 "name": "spare", 00:17:02.622 "uuid": "d8a935e8-f245-531b-af14-d1f209f9c267", 00:17:02.622 "is_configured": true, 00:17:02.623 "data_offset": 256, 00:17:02.623 "data_size": 7936 00:17:02.623 }, 00:17:02.623 { 00:17:02.623 "name": "BaseBdev2", 00:17:02.623 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:02.623 "is_configured": true, 00:17:02.623 "data_offset": 256, 00:17:02.623 "data_size": 7936 00:17:02.623 } 00:17:02.623 ] 00:17:02.623 }' 00:17:02.623 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.882 16:43:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:02.882 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=633 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.882 16:43:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.882 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.882 "name": "raid_bdev1", 00:17:02.882 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:02.882 "strip_size_kb": 0, 00:17:02.882 "state": "online", 00:17:02.882 "raid_level": "raid1", 00:17:02.882 "superblock": true, 00:17:02.882 "num_base_bdevs": 2, 00:17:02.882 "num_base_bdevs_discovered": 2, 00:17:02.882 "num_base_bdevs_operational": 2, 00:17:02.882 "process": { 00:17:02.882 "type": "rebuild", 00:17:02.882 "target": "spare", 00:17:02.882 "progress": { 00:17:02.882 "blocks": 2816, 00:17:02.882 "percent": 35 00:17:02.882 } 00:17:02.882 }, 00:17:02.882 "base_bdevs_list": [ 00:17:02.882 { 00:17:02.882 "name": "spare", 00:17:02.882 "uuid": "d8a935e8-f245-531b-af14-d1f209f9c267", 00:17:02.882 "is_configured": true, 00:17:02.882 "data_offset": 256, 00:17:02.882 "data_size": 7936 00:17:02.882 }, 00:17:02.882 { 00:17:02.882 "name": "BaseBdev2", 00:17:02.883 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:02.883 "is_configured": true, 00:17:02.883 "data_offset": 256, 00:17:02.883 "data_size": 7936 00:17:02.883 } 00:17:02.883 ] 00:17:02.883 }' 00:17:02.883 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.883 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.883 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.883 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.883 16:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.263 "name": "raid_bdev1", 00:17:04.263 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:04.263 "strip_size_kb": 0, 00:17:04.263 "state": "online", 00:17:04.263 "raid_level": "raid1", 00:17:04.263 "superblock": true, 00:17:04.263 "num_base_bdevs": 2, 00:17:04.263 "num_base_bdevs_discovered": 2, 00:17:04.263 
"num_base_bdevs_operational": 2, 00:17:04.263 "process": { 00:17:04.263 "type": "rebuild", 00:17:04.263 "target": "spare", 00:17:04.263 "progress": { 00:17:04.263 "blocks": 5632, 00:17:04.263 "percent": 70 00:17:04.263 } 00:17:04.263 }, 00:17:04.263 "base_bdevs_list": [ 00:17:04.263 { 00:17:04.263 "name": "spare", 00:17:04.263 "uuid": "d8a935e8-f245-531b-af14-d1f209f9c267", 00:17:04.263 "is_configured": true, 00:17:04.263 "data_offset": 256, 00:17:04.263 "data_size": 7936 00:17:04.263 }, 00:17:04.263 { 00:17:04.263 "name": "BaseBdev2", 00:17:04.263 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:04.263 "is_configured": true, 00:17:04.263 "data_offset": 256, 00:17:04.263 "data_size": 7936 00:17:04.263 } 00:17:04.263 ] 00:17:04.263 }' 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.263 16:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.833 [2024-12-07 16:43:03.582450] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:04.833 [2024-12-07 16:43:03.582695] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:04.833 [2024-12-07 16:43:03.582858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.093 "name": "raid_bdev1", 00:17:05.093 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:05.093 "strip_size_kb": 0, 00:17:05.093 "state": "online", 00:17:05.093 "raid_level": "raid1", 00:17:05.093 "superblock": true, 00:17:05.093 "num_base_bdevs": 2, 00:17:05.093 "num_base_bdevs_discovered": 2, 00:17:05.093 "num_base_bdevs_operational": 2, 00:17:05.093 "base_bdevs_list": [ 00:17:05.093 { 00:17:05.093 "name": "spare", 00:17:05.093 "uuid": "d8a935e8-f245-531b-af14-d1f209f9c267", 00:17:05.093 "is_configured": true, 00:17:05.093 "data_offset": 256, 00:17:05.093 "data_size": 7936 00:17:05.093 }, 00:17:05.093 { 00:17:05.093 "name": "BaseBdev2", 00:17:05.093 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:05.093 
"is_configured": true, 00:17:05.093 "data_offset": 256, 00:17:05.093 "data_size": 7936 00:17:05.093 } 00:17:05.093 ] 00:17:05.093 }' 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:05.093 16:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.353 "name": "raid_bdev1", 00:17:05.353 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:05.353 "strip_size_kb": 0, 00:17:05.353 "state": "online", 00:17:05.353 "raid_level": "raid1", 00:17:05.353 "superblock": true, 00:17:05.353 "num_base_bdevs": 2, 00:17:05.353 "num_base_bdevs_discovered": 2, 00:17:05.353 "num_base_bdevs_operational": 2, 00:17:05.353 "base_bdevs_list": [ 00:17:05.353 { 00:17:05.353 "name": "spare", 00:17:05.353 "uuid": "d8a935e8-f245-531b-af14-d1f209f9c267", 00:17:05.353 "is_configured": true, 00:17:05.353 "data_offset": 256, 00:17:05.353 "data_size": 7936 00:17:05.353 }, 00:17:05.353 { 00:17:05.353 "name": "BaseBdev2", 00:17:05.353 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:05.353 "is_configured": true, 00:17:05.353 "data_offset": 256, 00:17:05.353 "data_size": 7936 00:17:05.353 } 00:17:05.353 ] 00:17:05.353 }' 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.353 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:17:05.354 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.354 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:05.354 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.354 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.354 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.354 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.354 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.354 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.354 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.354 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.354 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.354 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.354 "name": "raid_bdev1", 00:17:05.354 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:05.354 "strip_size_kb": 0, 00:17:05.354 "state": "online", 00:17:05.354 "raid_level": "raid1", 00:17:05.354 "superblock": true, 00:17:05.354 "num_base_bdevs": 2, 00:17:05.354 "num_base_bdevs_discovered": 2, 00:17:05.354 "num_base_bdevs_operational": 2, 00:17:05.354 "base_bdevs_list": [ 00:17:05.354 { 00:17:05.354 "name": "spare", 00:17:05.354 "uuid": "d8a935e8-f245-531b-af14-d1f209f9c267", 00:17:05.354 
"is_configured": true, 00:17:05.354 "data_offset": 256, 00:17:05.354 "data_size": 7936 00:17:05.354 }, 00:17:05.354 { 00:17:05.354 "name": "BaseBdev2", 00:17:05.354 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:05.354 "is_configured": true, 00:17:05.354 "data_offset": 256, 00:17:05.354 "data_size": 7936 00:17:05.354 } 00:17:05.354 ] 00:17:05.354 }' 00:17:05.354 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.354 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.925 [2024-12-07 16:43:04.632009] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.925 [2024-12-07 16:43:04.632053] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.925 [2024-12-07 16:43:04.632185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.925 [2024-12-07 16:43:04.632262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.925 [2024-12-07 16:43:04.632276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.925 [2024-12-07 16:43:04.695869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:05.925 [2024-12-07 16:43:04.695965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.925 [2024-12-07 16:43:04.695991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:05.925 [2024-12-07 16:43:04.696004] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.925 [2024-12-07 16:43:04.698383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.925 [2024-12-07 16:43:04.698428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:05.925 [2024-12-07 16:43:04.698507] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:05.925 [2024-12-07 16:43:04.698569] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.925 [2024-12-07 16:43:04.698690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:05.925 spare 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.925 [2024-12-07 16:43:04.798636] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:17:05.925 [2024-12-07 16:43:04.798711] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:05.925 [2024-12-07 16:43:04.798925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:05.925 [2024-12-07 16:43:04.799084] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:17:05.925 [2024-12-07 16:43:04.799102] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:17:05.925 [2024-12-07 16:43:04.799223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.925 16:43:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.925 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.186 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.186 16:43:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.186 "name": "raid_bdev1", 00:17:06.186 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:06.186 "strip_size_kb": 0, 00:17:06.186 "state": "online", 00:17:06.186 "raid_level": "raid1", 00:17:06.186 "superblock": true, 00:17:06.186 "num_base_bdevs": 2, 00:17:06.186 "num_base_bdevs_discovered": 2, 00:17:06.186 "num_base_bdevs_operational": 2, 00:17:06.186 "base_bdevs_list": [ 00:17:06.186 { 00:17:06.186 "name": "spare", 00:17:06.186 "uuid": "d8a935e8-f245-531b-af14-d1f209f9c267", 00:17:06.186 "is_configured": true, 00:17:06.186 "data_offset": 256, 00:17:06.186 "data_size": 7936 00:17:06.186 }, 00:17:06.186 { 00:17:06.186 "name": "BaseBdev2", 00:17:06.186 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:06.186 "is_configured": true, 00:17:06.186 "data_offset": 256, 00:17:06.186 "data_size": 7936 00:17:06.186 } 00:17:06.186 ] 00:17:06.186 }' 00:17:06.186 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.186 16:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.445 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:06.445 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.445 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:06.445 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:06.445 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.445 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.445 16:43:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.445 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.445 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.445 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.445 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.445 "name": "raid_bdev1", 00:17:06.445 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:06.445 "strip_size_kb": 0, 00:17:06.445 "state": "online", 00:17:06.445 "raid_level": "raid1", 00:17:06.445 "superblock": true, 00:17:06.445 "num_base_bdevs": 2, 00:17:06.445 "num_base_bdevs_discovered": 2, 00:17:06.445 "num_base_bdevs_operational": 2, 00:17:06.445 "base_bdevs_list": [ 00:17:06.445 { 00:17:06.445 "name": "spare", 00:17:06.445 "uuid": "d8a935e8-f245-531b-af14-d1f209f9c267", 00:17:06.445 "is_configured": true, 00:17:06.445 "data_offset": 256, 00:17:06.445 "data_size": 7936 00:17:06.445 }, 00:17:06.445 { 00:17:06.445 "name": "BaseBdev2", 00:17:06.445 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:06.445 "is_configured": true, 00:17:06.445 "data_offset": 256, 00:17:06.445 "data_size": 7936 00:17:06.445 } 00:17:06.445 ] 00:17:06.445 }' 00:17:06.445 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:06.704 16:43:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.704 [2024-12-07 16:43:05.462702] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.704 16:43:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.704 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.704 "name": "raid_bdev1", 00:17:06.704 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:06.705 "strip_size_kb": 0, 00:17:06.705 "state": "online", 00:17:06.705 "raid_level": "raid1", 00:17:06.705 "superblock": true, 00:17:06.705 "num_base_bdevs": 2, 00:17:06.705 "num_base_bdevs_discovered": 1, 00:17:06.705 "num_base_bdevs_operational": 1, 00:17:06.705 "base_bdevs_list": [ 00:17:06.705 { 00:17:06.705 "name": null, 00:17:06.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.705 "is_configured": false, 00:17:06.705 "data_offset": 0, 00:17:06.705 "data_size": 7936 00:17:06.705 }, 00:17:06.705 { 00:17:06.705 "name": "BaseBdev2", 00:17:06.705 
"uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:06.705 "is_configured": true, 00:17:06.705 "data_offset": 256, 00:17:06.705 "data_size": 7936 00:17:06.705 } 00:17:06.705 ] 00:17:06.705 }' 00:17:06.705 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.705 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:07.274 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:07.274 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.274 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:07.274 [2024-12-07 16:43:05.913985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.274 [2024-12-07 16:43:05.914316] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:07.274 [2024-12-07 16:43:05.914416] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:07.274 [2024-12-07 16:43:05.914483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.274 [2024-12-07 16:43:05.919628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:07.274 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.274 16:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:07.274 [2024-12-07 16:43:05.921946] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:08.214 16:43:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.214 16:43:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.214 16:43:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.214 16:43:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.214 16:43:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.214 16:43:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.214 16:43:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.214 16:43:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.214 16:43:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.214 16:43:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.214 16:43:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:08.214 "name": "raid_bdev1", 00:17:08.214 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:08.214 "strip_size_kb": 0, 00:17:08.214 "state": "online", 00:17:08.214 "raid_level": "raid1", 00:17:08.214 "superblock": true, 00:17:08.214 "num_base_bdevs": 2, 00:17:08.214 "num_base_bdevs_discovered": 2, 00:17:08.214 "num_base_bdevs_operational": 2, 00:17:08.214 "process": { 00:17:08.214 "type": "rebuild", 00:17:08.214 "target": "spare", 00:17:08.214 "progress": { 00:17:08.214 "blocks": 2560, 00:17:08.214 "percent": 32 00:17:08.214 } 00:17:08.214 }, 00:17:08.214 "base_bdevs_list": [ 00:17:08.214 { 00:17:08.214 "name": "spare", 00:17:08.214 "uuid": "d8a935e8-f245-531b-af14-d1f209f9c267", 00:17:08.214 "is_configured": true, 00:17:08.214 "data_offset": 256, 00:17:08.214 "data_size": 7936 00:17:08.214 }, 00:17:08.214 { 00:17:08.214 "name": "BaseBdev2", 00:17:08.214 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:08.214 "is_configured": true, 00:17:08.214 "data_offset": 256, 00:17:08.214 "data_size": 7936 00:17:08.214 } 00:17:08.214 ] 00:17:08.214 }' 00:17:08.214 16:43:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.214 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.214 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.214 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.214 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:08.214 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.214 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.214 [2024-12-07 16:43:07.070892] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.474 [2024-12-07 16:43:07.131717] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:08.474 [2024-12-07 16:43:07.131937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.474 [2024-12-07 16:43:07.131982] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.474 [2024-12-07 16:43:07.132006] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.474 16:43:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.474 "name": "raid_bdev1", 00:17:08.474 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:08.474 "strip_size_kb": 0, 00:17:08.474 "state": "online", 00:17:08.474 "raid_level": "raid1", 00:17:08.474 "superblock": true, 00:17:08.474 "num_base_bdevs": 2, 00:17:08.474 "num_base_bdevs_discovered": 1, 00:17:08.474 "num_base_bdevs_operational": 1, 00:17:08.474 "base_bdevs_list": [ 00:17:08.474 { 00:17:08.474 "name": null, 00:17:08.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.474 "is_configured": false, 00:17:08.474 "data_offset": 0, 00:17:08.474 "data_size": 7936 00:17:08.474 }, 00:17:08.474 { 00:17:08.474 "name": "BaseBdev2", 00:17:08.474 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:08.474 "is_configured": true, 00:17:08.474 "data_offset": 256, 00:17:08.474 "data_size": 7936 00:17:08.474 } 00:17:08.474 ] 00:17:08.474 }' 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.474 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.734 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:08.734 16:43:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.734 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.734 [2024-12-07 16:43:07.594088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:08.734 [2024-12-07 16:43:07.594180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.734 [2024-12-07 16:43:07.594216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:08.734 [2024-12-07 16:43:07.594226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.735 [2024-12-07 16:43:07.594516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.735 [2024-12-07 16:43:07.594532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:08.735 [2024-12-07 16:43:07.594609] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:08.735 [2024-12-07 16:43:07.594622] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:08.735 [2024-12-07 16:43:07.594635] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:08.735 [2024-12-07 16:43:07.594658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.735 [2024-12-07 16:43:07.599751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:08.735 spare 00:17:08.735 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.735 16:43:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:08.735 [2024-12-07 16:43:07.601963] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:10.178 "name": "raid_bdev1", 00:17:10.178 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:10.178 "strip_size_kb": 0, 00:17:10.178 "state": "online", 00:17:10.178 "raid_level": "raid1", 00:17:10.178 "superblock": true, 00:17:10.178 "num_base_bdevs": 2, 00:17:10.178 "num_base_bdevs_discovered": 2, 00:17:10.178 "num_base_bdevs_operational": 2, 00:17:10.178 "process": { 00:17:10.178 "type": "rebuild", 00:17:10.178 "target": "spare", 00:17:10.178 "progress": { 00:17:10.178 "blocks": 2560, 00:17:10.178 "percent": 32 00:17:10.178 } 00:17:10.178 }, 00:17:10.178 "base_bdevs_list": [ 00:17:10.178 { 00:17:10.178 "name": "spare", 00:17:10.178 "uuid": "d8a935e8-f245-531b-af14-d1f209f9c267", 00:17:10.178 "is_configured": true, 00:17:10.178 "data_offset": 256, 00:17:10.178 "data_size": 7936 00:17:10.178 }, 00:17:10.178 { 00:17:10.178 "name": "BaseBdev2", 00:17:10.178 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:10.178 "is_configured": true, 00:17:10.178 "data_offset": 256, 00:17:10.178 "data_size": 7936 00:17:10.178 } 00:17:10.178 ] 00:17:10.178 }' 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.178 [2024-12-07 
16:43:08.750868] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.178 [2024-12-07 16:43:08.811571] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:10.178 [2024-12-07 16:43:08.811692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.178 [2024-12-07 16:43:08.811708] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.178 [2024-12-07 16:43:08.811719] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.178 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.179 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.179 16:43:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.179 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.179 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.179 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.179 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.179 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.179 "name": "raid_bdev1", 00:17:10.179 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:10.179 "strip_size_kb": 0, 00:17:10.179 "state": "online", 00:17:10.179 "raid_level": "raid1", 00:17:10.179 "superblock": true, 00:17:10.179 "num_base_bdevs": 2, 00:17:10.179 "num_base_bdevs_discovered": 1, 00:17:10.179 "num_base_bdevs_operational": 1, 00:17:10.179 "base_bdevs_list": [ 00:17:10.179 { 00:17:10.179 "name": null, 00:17:10.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.179 "is_configured": false, 00:17:10.179 "data_offset": 0, 00:17:10.179 "data_size": 7936 00:17:10.179 }, 00:17:10.179 { 00:17:10.179 "name": "BaseBdev2", 00:17:10.179 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:10.179 "is_configured": true, 00:17:10.179 "data_offset": 256, 00:17:10.179 "data_size": 7936 00:17:10.179 } 00:17:10.179 ] 00:17:10.179 }' 00:17:10.179 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.179 16:43:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.471 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.471 16:43:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.471 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.471 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.471 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.471 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.471 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.471 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.471 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.471 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.471 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.471 "name": "raid_bdev1", 00:17:10.471 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:10.471 "strip_size_kb": 0, 00:17:10.471 "state": "online", 00:17:10.471 "raid_level": "raid1", 00:17:10.471 "superblock": true, 00:17:10.471 "num_base_bdevs": 2, 00:17:10.471 "num_base_bdevs_discovered": 1, 00:17:10.471 "num_base_bdevs_operational": 1, 00:17:10.471 "base_bdevs_list": [ 00:17:10.471 { 00:17:10.471 "name": null, 00:17:10.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.471 "is_configured": false, 00:17:10.471 "data_offset": 0, 00:17:10.471 "data_size": 7936 00:17:10.471 }, 00:17:10.471 { 00:17:10.471 "name": "BaseBdev2", 00:17:10.471 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:10.471 "is_configured": true, 00:17:10.471 "data_offset": 256, 
00:17:10.471 "data_size": 7936 00:17:10.471 } 00:17:10.471 ] 00:17:10.471 }' 00:17:10.471 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.747 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.747 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.747 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.747 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:10.747 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.747 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.747 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.747 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:10.747 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.747 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.747 [2024-12-07 16:43:09.437456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:10.747 [2024-12-07 16:43:09.437543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.748 [2024-12-07 16:43:09.437569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:10.748 [2024-12-07 16:43:09.437582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.748 [2024-12-07 16:43:09.437798] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.748 [2024-12-07 16:43:09.437812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:10.748 [2024-12-07 16:43:09.437872] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:10.748 [2024-12-07 16:43:09.437901] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:10.748 [2024-12-07 16:43:09.437910] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:10.748 [2024-12-07 16:43:09.437929] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:10.748 BaseBdev1 00:17:10.748 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.748 16:43:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:11.684 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:11.684 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.685 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.685 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.685 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.685 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:11.685 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.685 16:43:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.685 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.685 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.685 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.685 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.685 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.685 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.685 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.685 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.685 "name": "raid_bdev1", 00:17:11.685 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:11.685 "strip_size_kb": 0, 00:17:11.685 "state": "online", 00:17:11.685 "raid_level": "raid1", 00:17:11.685 "superblock": true, 00:17:11.685 "num_base_bdevs": 2, 00:17:11.685 "num_base_bdevs_discovered": 1, 00:17:11.685 "num_base_bdevs_operational": 1, 00:17:11.685 "base_bdevs_list": [ 00:17:11.685 { 00:17:11.685 "name": null, 00:17:11.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.685 "is_configured": false, 00:17:11.685 "data_offset": 0, 00:17:11.685 "data_size": 7936 00:17:11.685 }, 00:17:11.685 { 00:17:11.685 "name": "BaseBdev2", 00:17:11.685 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:11.685 "is_configured": true, 00:17:11.685 "data_offset": 256, 00:17:11.685 "data_size": 7936 00:17:11.685 } 00:17:11.685 ] 00:17:11.685 }' 00:17:11.685 16:43:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.685 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.255 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.255 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.255 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.255 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:12.255 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.255 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.255 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.255 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.255 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.255 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.255 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.255 "name": "raid_bdev1", 00:17:12.255 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:12.255 "strip_size_kb": 0, 00:17:12.255 "state": "online", 00:17:12.255 "raid_level": "raid1", 00:17:12.255 "superblock": true, 00:17:12.255 "num_base_bdevs": 2, 00:17:12.255 "num_base_bdevs_discovered": 1, 00:17:12.255 "num_base_bdevs_operational": 1, 00:17:12.255 "base_bdevs_list": [ 00:17:12.255 { 00:17:12.255 "name": 
null, 00:17:12.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.255 "is_configured": false, 00:17:12.255 "data_offset": 0, 00:17:12.255 "data_size": 7936 00:17:12.255 }, 00:17:12.255 { 00:17:12.255 "name": "BaseBdev2", 00:17:12.255 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:12.255 "is_configured": true, 00:17:12.255 "data_offset": 256, 00:17:12.255 "data_size": 7936 00:17:12.255 } 00:17:12.255 ] 00:17:12.255 }' 00:17:12.255 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.255 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:12.255 16:43:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.255 [2024-12-07 16:43:11.062813] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:12.255 [2024-12-07 16:43:11.063105] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:12.255 [2024-12-07 16:43:11.063160] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:12.255 request: 00:17:12.255 { 00:17:12.255 "base_bdev": "BaseBdev1", 00:17:12.255 "raid_bdev": "raid_bdev1", 00:17:12.255 "method": "bdev_raid_add_base_bdev", 00:17:12.255 "req_id": 1 00:17:12.255 } 00:17:12.255 Got JSON-RPC error response 00:17:12.255 response: 00:17:12.255 { 00:17:12.255 "code": -22, 00:17:12.255 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:12.255 } 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:12.255 16:43:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:13.194 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:13.194 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.194 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.194 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.194 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.194 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:13.194 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.194 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.194 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.194 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.194 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.194 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.194 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.194 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.453 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.453 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.453 "name": "raid_bdev1", 00:17:13.453 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:13.453 "strip_size_kb": 0, 
00:17:13.453 "state": "online", 00:17:13.453 "raid_level": "raid1", 00:17:13.453 "superblock": true, 00:17:13.453 "num_base_bdevs": 2, 00:17:13.453 "num_base_bdevs_discovered": 1, 00:17:13.453 "num_base_bdevs_operational": 1, 00:17:13.453 "base_bdevs_list": [ 00:17:13.453 { 00:17:13.454 "name": null, 00:17:13.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.454 "is_configured": false, 00:17:13.454 "data_offset": 0, 00:17:13.454 "data_size": 7936 00:17:13.454 }, 00:17:13.454 { 00:17:13.454 "name": "BaseBdev2", 00:17:13.454 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:13.454 "is_configured": true, 00:17:13.454 "data_offset": 256, 00:17:13.454 "data_size": 7936 00:17:13.454 } 00:17:13.454 ] 00:17:13.454 }' 00:17:13.454 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.454 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.713 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.713 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.713 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.713 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.713 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.713 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.713 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.713 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.713 
16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.713 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.713 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.713 "name": "raid_bdev1", 00:17:13.713 "uuid": "10cbc035-70d1-460b-836d-5511bf53f91f", 00:17:13.713 "strip_size_kb": 0, 00:17:13.713 "state": "online", 00:17:13.713 "raid_level": "raid1", 00:17:13.713 "superblock": true, 00:17:13.713 "num_base_bdevs": 2, 00:17:13.713 "num_base_bdevs_discovered": 1, 00:17:13.713 "num_base_bdevs_operational": 1, 00:17:13.713 "base_bdevs_list": [ 00:17:13.713 { 00:17:13.713 "name": null, 00:17:13.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.714 "is_configured": false, 00:17:13.714 "data_offset": 0, 00:17:13.714 "data_size": 7936 00:17:13.714 }, 00:17:13.714 { 00:17:13.714 "name": "BaseBdev2", 00:17:13.714 "uuid": "315f31a6-922a-5f01-8bb6-54f274a6c1fa", 00:17:13.714 "is_configured": true, 00:17:13.714 "data_offset": 256, 00:17:13.714 "data_size": 7936 00:17:13.714 } 00:17:13.714 ] 00:17:13.714 }' 00:17:13.714 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.714 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.714 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.714 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.714 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99683 00:17:13.714 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99683 ']' 00:17:13.714 16:43:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99683 00:17:13.714 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:17:13.714 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:13.714 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99683 00:17:13.714 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:13.714 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:13.714 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99683' 00:17:13.714 killing process with pid 99683 00:17:13.714 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99683 00:17:13.714 Received shutdown signal, test time was about 60.000000 seconds 00:17:13.714 00:17:13.714 Latency(us) 00:17:13.714 [2024-12-07T16:43:12.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.714 [2024-12-07T16:43:12.613Z] =================================================================================================================== 00:17:13.714 [2024-12-07T16:43:12.613Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:13.714 [2024-12-07 16:43:12.596829] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:13.714 16:43:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99683 00:17:13.714 [2024-12-07 16:43:12.596991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.714 [2024-12-07 16:43:12.597052] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:17:13.714 [2024-12-07 16:43:12.597062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:17:13.974 [2024-12-07 16:43:12.660545] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:14.233 16:43:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:14.233 00:17:14.233 real 0m16.395s 00:17:14.233 user 0m21.646s 00:17:14.233 sys 0m1.871s 00:17:14.233 16:43:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:14.233 16:43:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.233 ************************************ 00:17:14.233 END TEST raid_rebuild_test_sb_md_interleaved 00:17:14.233 ************************************ 00:17:14.233 16:43:13 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:14.233 16:43:13 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:14.233 16:43:13 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99683 ']' 00:17:14.233 16:43:13 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99683 00:17:14.233 16:43:13 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:14.233 00:17:14.233 real 10m14.168s 00:17:14.233 user 14m19.064s 00:17:14.233 sys 1m58.967s 00:17:14.233 16:43:13 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:14.233 ************************************ 00:17:14.233 END TEST bdev_raid 00:17:14.233 ************************************ 00:17:14.233 16:43:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:14.492 16:43:13 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:14.492 16:43:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:14.492 16:43:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:14.492 16:43:13 -- common/autotest_common.sh@10 -- # set +x 00:17:14.492 
************************************ 00:17:14.492 START TEST spdkcli_raid 00:17:14.492 ************************************ 00:17:14.492 16:43:13 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:14.492 * Looking for test storage... 00:17:14.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:14.492 16:43:13 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:14.492 16:43:13 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:14.492 16:43:13 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:14.492 16:43:13 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:14.492 16:43:13 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:14.492 16:43:13 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:14.492 16:43:13 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.493 16:43:13 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:14.753 16:43:13 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.753 16:43:13 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:14.753 16:43:13 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:14.753 16:43:13 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.753 16:43:13 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:14.753 16:43:13 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:14.753 16:43:13 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.753 16:43:13 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.753 16:43:13 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:14.753 16:43:13 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.753 16:43:13 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:14.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.753 --rc genhtml_branch_coverage=1 00:17:14.753 --rc genhtml_function_coverage=1 00:17:14.753 --rc genhtml_legend=1 00:17:14.753 --rc geninfo_all_blocks=1 00:17:14.753 --rc geninfo_unexecuted_blocks=1 00:17:14.753 00:17:14.753 ' 00:17:14.753 16:43:13 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:14.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.753 --rc genhtml_branch_coverage=1 00:17:14.753 --rc genhtml_function_coverage=1 00:17:14.753 --rc genhtml_legend=1 00:17:14.753 --rc geninfo_all_blocks=1 00:17:14.753 --rc geninfo_unexecuted_blocks=1 00:17:14.753 00:17:14.753 ' 00:17:14.753 
16:43:13 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:14.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.753 --rc genhtml_branch_coverage=1 00:17:14.753 --rc genhtml_function_coverage=1 00:17:14.753 --rc genhtml_legend=1 00:17:14.753 --rc geninfo_all_blocks=1 00:17:14.753 --rc geninfo_unexecuted_blocks=1 00:17:14.753 00:17:14.753 ' 00:17:14.753 16:43:13 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:14.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.753 --rc genhtml_branch_coverage=1 00:17:14.753 --rc genhtml_function_coverage=1 00:17:14.753 --rc genhtml_legend=1 00:17:14.753 --rc geninfo_all_blocks=1 00:17:14.753 --rc geninfo_unexecuted_blocks=1 00:17:14.753 00:17:14.753 ' 00:17:14.753 16:43:13 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:14.753 16:43:13 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:14.753 16:43:13 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:14.753 16:43:13 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:14.753 16:43:13 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:14.753 16:43:13 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:14.753 16:43:13 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:14.753 16:43:13 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:14.753 16:43:13 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:14.753 16:43:13 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:14.753 16:43:13 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:14.753 16:43:13 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:14.753 16:43:13 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:14.753 16:43:13 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:14.753 16:43:13 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:14.753 16:43:13 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:14.753 16:43:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:14.754 16:43:13 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:14.754 16:43:13 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=100348 00:17:14.754 16:43:13 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:14.754 16:43:13 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 100348 00:17:14.754 16:43:13 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 100348 ']' 00:17:14.754 16:43:13 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.754 16:43:13 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.754 16:43:13 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.754 16:43:13 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.754 16:43:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:14.754 [2024-12-07 16:43:13.542320] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:14.754 [2024-12-07 16:43:13.542632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100348 ] 00:17:15.014 [2024-12-07 16:43:13.711682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:15.014 [2024-12-07 16:43:13.793838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.014 [2024-12-07 16:43:13.793956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.583 16:43:14 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.583 16:43:14 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:17:15.583 16:43:14 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:15.583 16:43:14 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:15.583 16:43:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:15.583 16:43:14 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:15.583 16:43:14 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:15.583 16:43:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:15.583 16:43:14 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:15.583 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:15.583 ' 00:17:17.488 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:17:17.489 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:17:17.489 16:43:16 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:17:17.489 16:43:16 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:17.489 16:43:16 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.489 16:43:16 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:17:17.489 16:43:16 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:17.489 16:43:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.489 16:43:16 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:17:17.489 ' 00:17:18.425 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:17:18.683 16:43:17 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:17:18.683 16:43:17 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:18.683 16:43:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.683 16:43:17 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:17:18.683 16:43:17 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:18.683 16:43:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.683 16:43:17 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:17:18.683 16:43:17 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:17:19.251 16:43:17 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:17:19.251 16:43:17 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:17:19.251 16:43:17 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:17:19.251 16:43:17 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:19.251 16:43:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:19.251 16:43:18 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:17:19.251 16:43:18 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:19.251 16:43:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:19.251 16:43:18 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:17:19.251 ' 00:17:20.186 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:17:20.444 16:43:19 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:17:20.444 16:43:19 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:20.444 16:43:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:20.444 16:43:19 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:17:20.444 16:43:19 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:20.444 16:43:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:20.444 16:43:19 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:17:20.444 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:17:20.444 ' 00:17:21.818 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:17:21.818 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:17:21.818 16:43:20 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:17:21.818 16:43:20 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:21.818 16:43:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.077 16:43:20 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 100348 00:17:22.077 16:43:20 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100348 ']' 00:17:22.077 16:43:20 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100348 00:17:22.077 16:43:20 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:17:22.077 16:43:20 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:22.077 16:43:20 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100348 00:17:22.077 16:43:20 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:22.077 16:43:20 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:22.077 16:43:20 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100348' 00:17:22.077 killing process with pid 100348 00:17:22.077 16:43:20 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 100348 00:17:22.077 16:43:20 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 100348 00:17:22.644 16:43:21 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:17:22.645 16:43:21 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 100348 ']' 00:17:22.645 16:43:21 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 100348 00:17:22.645 16:43:21 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100348 ']' 00:17:22.645 16:43:21 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100348 00:17:22.645 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (100348) - No such process 00:17:22.645 16:43:21 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 100348 is not found' 00:17:22.645 Process with pid 100348 is not found 00:17:22.645 16:43:21 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:17:22.645 16:43:21 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:17:22.645 16:43:21 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:17:22.645 16:43:21 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:17:22.645 00:17:22.645 real 0m8.284s 00:17:22.645 user 0m17.324s 
00:17:22.645 sys 0m1.347s 00:17:22.645 16:43:21 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.645 16:43:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.645 ************************************ 00:17:22.645 END TEST spdkcli_raid 00:17:22.645 ************************************ 00:17:22.645 16:43:21 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:22.645 16:43:21 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:22.645 16:43:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.645 16:43:21 -- common/autotest_common.sh@10 -- # set +x 00:17:22.645 ************************************ 00:17:22.645 START TEST blockdev_raid5f 00:17:22.645 ************************************ 00:17:22.645 16:43:21 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:22.903 * Looking for test storage... 00:17:22.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:22.903 16:43:21 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:22.903 16:43:21 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:17:22.903 16:43:21 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:22.903 16:43:21 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.903 16:43:21 blockdev_raid5f -- 
scripts/common.sh@337 -- # read -ra ver2 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.903 16:43:21 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:17:22.903 16:43:21 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.903 16:43:21 blockdev_raid5f -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:17:22.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.903 --rc genhtml_branch_coverage=1 00:17:22.903 --rc genhtml_function_coverage=1 00:17:22.903 --rc genhtml_legend=1 00:17:22.903 --rc geninfo_all_blocks=1 00:17:22.903 --rc geninfo_unexecuted_blocks=1 00:17:22.903 00:17:22.903 ' 00:17:22.903 16:43:21 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:22.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.903 --rc genhtml_branch_coverage=1 00:17:22.903 --rc genhtml_function_coverage=1 00:17:22.903 --rc genhtml_legend=1 00:17:22.903 --rc geninfo_all_blocks=1 00:17:22.903 --rc geninfo_unexecuted_blocks=1 00:17:22.903 00:17:22.903 ' 00:17:22.903 16:43:21 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:22.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.903 --rc genhtml_branch_coverage=1 00:17:22.903 --rc genhtml_function_coverage=1 00:17:22.903 --rc genhtml_legend=1 00:17:22.903 --rc geninfo_all_blocks=1 00:17:22.903 --rc geninfo_unexecuted_blocks=1 00:17:22.903 00:17:22.903 ' 00:17:22.903 16:43:21 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:22.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.903 --rc genhtml_branch_coverage=1 00:17:22.903 --rc genhtml_function_coverage=1 00:17:22.903 --rc genhtml_legend=1 00:17:22.903 --rc geninfo_all_blocks=1 00:17:22.904 --rc geninfo_unexecuted_blocks=1 00:17:22.904 00:17:22.904 ' 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100612 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:22.904 16:43:21 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100612 00:17:22.904 16:43:21 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 100612 ']' 00:17:22.904 16:43:21 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.904 16:43:21 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.904 16:43:21 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.904 16:43:21 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.904 16:43:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:23.162 [2024-12-07 16:43:21.892130] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:23.162 [2024-12-07 16:43:21.892438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100612 ] 00:17:23.162 [2024-12-07 16:43:22.059777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.421 [2024-12-07 16:43:22.141576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.989 16:43:22 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:23.990 16:43:22 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:17:23.990 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:23.990 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:17:23.990 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:17:23.990 16:43:22 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.990 16:43:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:23.990 Malloc0 00:17:23.990 Malloc1 00:17:23.990 Malloc2 00:17:23.990 16:43:22 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.990 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:23.990 16:43:22 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.990 16:43:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:23.990 16:43:22 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.990 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:17:23.990 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:23.990 16:43:22 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.990 16:43:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:23.990 
16:43:22 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.990 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:23.990 16:43:22 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.990 16:43:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:23.990 16:43:22 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.990 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:23.990 16:43:22 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.990 16:43:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:24.250 16:43:22 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.250 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:24.250 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:17:24.250 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:24.250 16:43:22 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.250 16:43:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:24.250 16:43:22 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.250 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:24.250 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:24.250 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "365f2185-ece0-42cd-9324-73da8b060421"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "365f2185-ece0-42cd-9324-73da8b060421",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "365f2185-ece0-42cd-9324-73da8b060421",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "38f24041-2738-49ac-8381-01f3bbdb736e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "11ebbe2d-72a5-47cd-aaf5-dab174688604",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "8d1dbb00-9b12-4c3f-9db6-a74d7489b487",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:24.250 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:24.250 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:17:24.250 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:24.250 16:43:22 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100612 00:17:24.250 16:43:22 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 100612 ']' 00:17:24.250 16:43:22 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 100612 00:17:24.250 16:43:23 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:17:24.250 16:43:23 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:17:24.250 16:43:23 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100612 00:17:24.250 killing process with pid 100612 00:17:24.250 16:43:23 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:24.250 16:43:23 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:24.250 16:43:23 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100612' 00:17:24.250 16:43:23 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 100612 00:17:24.250 16:43:23 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 100612 00:17:25.190 16:43:23 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:25.190 16:43:23 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:25.190 16:43:23 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:25.190 16:43:23 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.190 16:43:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:25.190 ************************************ 00:17:25.190 START TEST bdev_hello_world 00:17:25.190 ************************************ 00:17:25.190 16:43:23 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:25.190 [2024-12-07 16:43:23.855559] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:25.190 [2024-12-07 16:43:23.855772] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100651 ] 00:17:25.190 [2024-12-07 16:43:24.015891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.450 [2024-12-07 16:43:24.098837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.709 [2024-12-07 16:43:24.358167] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:25.709 [2024-12-07 16:43:24.358361] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:17:25.709 [2024-12-07 16:43:24.358398] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:25.709 [2024-12-07 16:43:24.358780] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:25.709 [2024-12-07 16:43:24.358977] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:25.709 [2024-12-07 16:43:24.359026] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:25.709 [2024-12-07 16:43:24.359103] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:17:25.709 00:17:25.709 [2024-12-07 16:43:24.359155] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:25.969 00:17:25.969 real 0m1.002s 00:17:25.969 user 0m0.583s 00:17:25.969 sys 0m0.300s 00:17:25.969 ************************************ 00:17:25.969 END TEST bdev_hello_world 00:17:25.969 ************************************ 00:17:25.969 16:43:24 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:25.969 16:43:24 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:25.969 16:43:24 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:25.969 16:43:24 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:25.969 16:43:24 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.969 16:43:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:25.969 ************************************ 00:17:25.969 START TEST bdev_bounds 00:17:25.969 ************************************ 00:17:25.969 16:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:17:25.969 Process bdevio pid: 100687 00:17:25.969 16:43:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100687 00:17:25.969 16:43:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:25.969 16:43:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:25.969 16:43:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100687' 00:17:25.969 16:43:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100687 00:17:25.969 16:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100687 ']' 00:17:25.969 16:43:24 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.969 16:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:25.969 16:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.969 16:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:25.969 16:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:26.229 [2024-12-07 16:43:24.939052] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:26.229 [2024-12-07 16:43:24.939328] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100687 ] 00:17:26.229 [2024-12-07 16:43:25.106258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:26.488 [2024-12-07 16:43:25.189200] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.488 [2024-12-07 16:43:25.189320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.488 [2024-12-07 16:43:25.189221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.056 16:43:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:27.056 16:43:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:17:27.056 16:43:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:27.056 I/O targets: 00:17:27.056 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:17:27.056 
00:17:27.056 00:17:27.056 CUnit - A unit testing framework for C - Version 2.1-3 00:17:27.056 http://cunit.sourceforge.net/ 00:17:27.056 00:17:27.056 00:17:27.056 Suite: bdevio tests on: raid5f 00:17:27.056 Test: blockdev write read block ...passed 00:17:27.056 Test: blockdev write zeroes read block ...passed 00:17:27.056 Test: blockdev write zeroes read no split ...passed 00:17:27.314 Test: blockdev write zeroes read split ...passed 00:17:27.314 Test: blockdev write zeroes read split partial ...passed 00:17:27.314 Test: blockdev reset ...passed 00:17:27.314 Test: blockdev write read 8 blocks ...passed 00:17:27.314 Test: blockdev write read size > 128k ...passed 00:17:27.314 Test: blockdev write read invalid size ...passed 00:17:27.314 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:27.314 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:27.314 Test: blockdev write read max offset ...passed 00:17:27.314 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:27.314 Test: blockdev writev readv 8 blocks ...passed 00:17:27.314 Test: blockdev writev readv 30 x 1block ...passed 00:17:27.314 Test: blockdev writev readv block ...passed 00:17:27.314 Test: blockdev writev readv size > 128k ...passed 00:17:27.314 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:27.314 Test: blockdev comparev and writev ...passed 00:17:27.314 Test: blockdev nvme passthru rw ...passed 00:17:27.314 Test: blockdev nvme passthru vendor specific ...passed 00:17:27.314 Test: blockdev nvme admin passthru ...passed 00:17:27.314 Test: blockdev copy ...passed 00:17:27.314 00:17:27.314 Run Summary: Type Total Ran Passed Failed Inactive 00:17:27.314 suites 1 1 n/a 0 0 00:17:27.314 tests 23 23 23 0 0 00:17:27.314 asserts 130 130 130 0 n/a 00:17:27.314 00:17:27.314 Elapsed time = 0.389 seconds 00:17:27.314 0 00:17:27.314 16:43:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100687 
00:17:27.314 16:43:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100687 ']' 00:17:27.314 16:43:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100687 00:17:27.314 16:43:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:17:27.314 16:43:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:27.314 16:43:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100687 00:17:27.314 16:43:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:27.314 killing process with pid 100687 00:17:27.314 16:43:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:27.314 16:43:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100687' 00:17:27.314 16:43:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100687 00:17:27.314 16:43:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100687 00:17:27.882 ************************************ 00:17:27.882 END TEST bdev_bounds 00:17:27.882 ************************************ 00:17:27.882 16:43:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:27.882 00:17:27.882 real 0m1.702s 00:17:27.882 user 0m3.887s 00:17:27.882 sys 0m0.447s 00:17:27.882 16:43:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:27.882 16:43:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:27.882 16:43:26 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:27.882 16:43:26 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:27.882 16:43:26 blockdev_raid5f -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:17:27.882 16:43:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:27.882 ************************************ 00:17:27.882 START TEST bdev_nbd 00:17:27.882 ************************************ 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:17:27.882 16:43:26 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100731 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100731 /var/tmp/spdk-nbd.sock 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100731 ']' 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:27.882 16:43:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:27.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:27.883 16:43:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:27.883 16:43:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:27.883 [2024-12-07 16:43:26.721305] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:27.883 [2024-12-07 16:43:26.721597] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.141 [2024-12-07 16:43:26.888390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.141 [2024-12-07 16:43:26.970918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:28.709 16:43:27 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.969 1+0 records in 00:17:28.969 1+0 records out 00:17:28.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405065 s, 10.1 MB/s 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:28.969 16:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:29.228 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:29.228 { 00:17:29.228 "nbd_device": "/dev/nbd0", 00:17:29.228 "bdev_name": "raid5f" 00:17:29.228 } 00:17:29.228 ]' 00:17:29.228 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:29.228 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:29.228 { 00:17:29.228 "nbd_device": "/dev/nbd0", 00:17:29.228 "bdev_name": "raid5f" 00:17:29.228 } 00:17:29.228 ]' 00:17:29.228 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:29.228 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:29.228 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.228 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:29.228 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:29.228 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:29.228 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:29.228 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:29.488 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:17:29.488 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:29.488 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:29.488 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:29.488 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:29.488 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:29.488 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:29.488 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:29.488 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:29.488 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.488 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:29.747 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:17:30.007 /dev/nbd0 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:30.007 16:43:28 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:30.007 1+0 records in 00:17:30.007 1+0 records out 00:17:30.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643977 s, 6.4 MB/s 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:30.007 16:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:30.267 { 00:17:30.267 "nbd_device": "/dev/nbd0", 00:17:30.267 "bdev_name": "raid5f" 00:17:30.267 } 00:17:30.267 ]' 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:30.267 { 00:17:30.267 "nbd_device": "/dev/nbd0", 00:17:30.267 "bdev_name": "raid5f" 00:17:30.267 } 00:17:30.267 ]' 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:30.267 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:30.527 256+0 records in 00:17:30.527 256+0 records out 00:17:30.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132947 s, 78.9 MB/s 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:30.527 256+0 records in 00:17:30.527 256+0 records out 00:17:30.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309368 s, 33.9 MB/s 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.527 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:30.787 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:30.787 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:30.787 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:30.787 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.787 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.787 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:30.787 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:30.787 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.787 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:30.787 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:30.787 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:31.047 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:31.313 malloc_lvol_verify 00:17:31.313 16:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:31.313 3a7fb792-c205-4fc8-8187-b7b427386ee7 00:17:31.313 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:31.604 58914ea5-9bc9-41c1-b596-3affa06bc1bf 00:17:31.604 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:31.876 /dev/nbd0 00:17:31.876 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:31.876 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:31.876 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:31.876 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:31.876 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:31.876 mke2fs 1.47.0 (5-Feb-2023) 00:17:31.876 Discarding device blocks: 0/4096 done 00:17:31.876 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:31.876 00:17:31.876 Allocating group tables: 0/1 done 00:17:31.876 Writing inode tables: 0/1 done 00:17:31.876 Creating journal (1024 blocks): done 00:17:31.876 Writing superblocks and filesystem accounting information: 0/1 done 00:17:31.876 00:17:31.876 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:31.876 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.876 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:31.876 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:31.876 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:31.876 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.876 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:32.144 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:32.144 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:32.144 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:32.144 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.144 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.144 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:32.144 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:32.144 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.144 16:43:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100731 00:17:32.144 16:43:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100731 ']' 00:17:32.144 16:43:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100731 00:17:32.144 16:43:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:17:32.145 16:43:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:32.145 16:43:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100731 00:17:32.145 16:43:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:32.145 16:43:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:32.145 killing process with pid 100731 00:17:32.145 16:43:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100731' 00:17:32.145 16:43:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100731 00:17:32.145 16:43:30 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 100731 00:17:32.714 16:43:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:32.714 00:17:32.714 real 0m4.772s 00:17:32.714 user 0m6.832s 00:17:32.714 sys 0m1.416s 00:17:32.714 16:43:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.714 16:43:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:32.714 ************************************ 00:17:32.714 END TEST bdev_nbd 00:17:32.714 ************************************ 00:17:32.714 16:43:31 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:32.714 16:43:31 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:17:32.714 16:43:31 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:17:32.714 16:43:31 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:32.714 16:43:31 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:32.714 16:43:31 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.714 16:43:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:32.714 ************************************ 00:17:32.714 START TEST bdev_fio 00:17:32.714 ************************************ 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:32.714 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:17:32.714 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:17:32.715 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:17:32.715 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:17:32.715 16:43:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:32.715 16:43:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:32.715 16:43:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:17:32.715 16:43:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:32.715 16:43:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:32.715 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:17:32.715 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.715 16:43:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:32.974 ************************************ 00:17:32.974 START TEST bdev_fio_rw_verify 00:17:32.974 ************************************ 00:17:32.974 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:32.974 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:32.974 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:32.974 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:32.974 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:32.974 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:32.974 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:17:32.974 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:32.974 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:32.974 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:32.974 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:17:32.975 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:32.975 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:32.975 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:32.975 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:17:32.975 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:32.975 16:43:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:32.975 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:32.975 fio-3.35 00:17:32.975 Starting 1 thread 00:17:45.182 00:17:45.182 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100924: Sat Dec 7 16:43:42 2024 00:17:45.182 read: IOPS=11.4k, BW=44.4MiB/s (46.6MB/s)(444MiB/10001msec) 00:17:45.182 slat (usec): min=18, max=155, avg=20.68, stdev= 2.00 00:17:45.182 clat (usec): min=10, max=385, avg=138.31, stdev=49.44 00:17:45.182 lat (usec): min=30, max=405, avg=158.98, stdev=49.73 00:17:45.182 clat percentiles (usec): 00:17:45.182 | 50.000th=[ 143], 99.000th=[ 241], 99.900th=[ 273], 99.990th=[ 334], 00:17:45.182 | 99.999th=[ 383] 00:17:45.182 write: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(462MiB/9884msec); 0 zone resets 00:17:45.182 slat (usec): min=8, max=376, avg=18.14, stdev= 4.83 00:17:45.182 clat (usec): min=59, max=1782, avg=323.24, stdev=47.66 00:17:45.182 lat (usec): min=76, max=2118, avg=341.38, stdev=48.98 00:17:45.182 clat percentiles (usec): 00:17:45.182 | 50.000th=[ 330], 99.000th=[ 429], 99.900th=[ 644], 99.990th=[ 1385], 00:17:45.182 | 99.999th=[ 1696] 00:17:45.182 bw ( KiB/s): min=43560, max=50344, per=98.55%, avg=47128.42, stdev=1804.69, samples=19 00:17:45.182 iops : min=10890, max=12586, avg=11782.11, stdev=451.17, samples=19 00:17:45.182 lat (usec) : 20=0.01%, 50=0.01%, 100=12.11%, 
250=39.72%, 500=48.08% 00:17:45.182 lat (usec) : 750=0.06%, 1000=0.02% 00:17:45.182 lat (msec) : 2=0.01% 00:17:45.182 cpu : usr=98.89%, sys=0.48%, ctx=30, majf=0, minf=12534 00:17:45.182 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:45.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.182 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.182 issued rwts: total=113676,118164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:45.182 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:45.182 00:17:45.182 Run status group 0 (all jobs): 00:17:45.182 READ: bw=44.4MiB/s (46.6MB/s), 44.4MiB/s-44.4MiB/s (46.6MB/s-46.6MB/s), io=444MiB (466MB), run=10001-10001msec 00:17:45.182 WRITE: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=462MiB (484MB), run=9884-9884msec 00:17:45.182 ----------------------------------------------------- 00:17:45.182 Suppressions used: 00:17:45.182 count bytes template 00:17:45.182 1 7 /usr/src/fio/parse.c 00:17:45.182 939 90144 /usr/src/fio/iolog.c 00:17:45.183 1 8 libtcmalloc_minimal.so 00:17:45.183 1 904 libcrypto.so 00:17:45.183 ----------------------------------------------------- 00:17:45.183 00:17:45.183 00:17:45.183 real 0m11.454s 00:17:45.183 user 0m11.513s 00:17:45.183 sys 0m0.729s 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:45.183 ************************************ 00:17:45.183 END TEST bdev_fio_rw_verify 00:17:45.183 ************************************ 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "365f2185-ece0-42cd-9324-73da8b060421"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "365f2185-ece0-42cd-9324-73da8b060421",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "365f2185-ece0-42cd-9324-73da8b060421",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "38f24041-2738-49ac-8381-01f3bbdb736e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "11ebbe2d-72a5-47cd-aaf5-dab174688604",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "8d1dbb00-9b12-4c3f-9db6-a74d7489b487",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:45.183 /home/vagrant/spdk_repo/spdk 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:17:45.183 00:17:45.183 real 0m11.740s 00:17:45.183 user 0m11.644s 00:17:45.183 sys 0m0.865s 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:45.183 16:43:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:45.183 ************************************ 00:17:45.183 END TEST bdev_fio 00:17:45.183 ************************************ 00:17:45.183 16:43:43 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:45.183 16:43:43 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:45.183 16:43:43 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:45.183 16:43:43 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.183 16:43:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:45.183 ************************************ 00:17:45.183 START TEST bdev_verify 00:17:45.183 ************************************ 00:17:45.183 16:43:43 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:45.183 [2024-12-07 16:43:43.364419] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:45.183 [2024-12-07 16:43:43.364578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101079 ] 00:17:45.183 [2024-12-07 16:43:43.531064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:45.183 [2024-12-07 16:43:43.613371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.183 [2024-12-07 16:43:43.613511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.183 Running I/O for 5 seconds... 00:17:47.066 10271.00 IOPS, 40.12 MiB/s [2024-12-07T16:43:46.903Z] 10224.50 IOPS, 39.94 MiB/s [2024-12-07T16:43:48.283Z] 10281.67 IOPS, 40.16 MiB/s [2024-12-07T16:43:49.222Z] 10306.25 IOPS, 40.26 MiB/s [2024-12-07T16:43:49.222Z] 10280.60 IOPS, 40.16 MiB/s 00:17:50.323 Latency(us) 00:17:50.323 [2024-12-07T16:43:49.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.323 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:50.323 Verification LBA range: start 0x0 length 0x2000 00:17:50.323 raid5f : 5.01 5976.28 23.34 0.00 0.00 32217.04 406.02 23352.57 00:17:50.323 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:50.323 Verification LBA range: start 0x2000 length 0x2000 00:17:50.323 raid5f : 5.02 4321.12 16.88 0.00 0.00 44492.82 137.73 32739.38 00:17:50.323 [2024-12-07T16:43:49.222Z] =================================================================================================================== 00:17:50.323 [2024-12-07T16:43:49.222Z] Total : 10297.40 40.22 0.00 0.00 37373.87 137.73 32739.38 00:17:50.583 00:17:50.583 real 0m6.050s 00:17:50.583 user 0m11.065s 00:17:50.583 sys 0m0.340s 00:17:50.583 16:43:49 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:50.583 16:43:49 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:50.583 ************************************ 00:17:50.583 END TEST bdev_verify 00:17:50.583 ************************************ 00:17:50.583 16:43:49 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:50.583 16:43:49 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:50.583 16:43:49 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:50.583 16:43:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:50.583 ************************************ 00:17:50.583 START TEST bdev_verify_big_io 00:17:50.583 ************************************ 00:17:50.583 16:43:49 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:50.843 [2024-12-07 16:43:49.491645] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:50.843 [2024-12-07 16:43:49.491800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101163 ] 00:17:50.843 [2024-12-07 16:43:49.659064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:51.103 [2024-12-07 16:43:49.742793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.103 [2024-12-07 16:43:49.742940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.362 Running I/O for 5 seconds... 
00:17:53.241 633.00 IOPS, 39.56 MiB/s [2024-12-07T16:43:53.520Z] 697.50 IOPS, 43.59 MiB/s [2024-12-07T16:43:54.090Z] 739.00 IOPS, 46.19 MiB/s [2024-12-07T16:43:55.473Z] 729.75 IOPS, 45.61 MiB/s [2024-12-07T16:43:55.473Z] 736.20 IOPS, 46.01 MiB/s 00:17:56.574 Latency(us) 00:17:56.574 [2024-12-07T16:43:55.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.574 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:56.574 Verification LBA range: start 0x0 length 0x200 00:17:56.574 raid5f : 5.20 415.15 25.95 0.00 0.00 7715591.64 185.12 331514.86 00:17:56.574 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:56.574 Verification LBA range: start 0x200 length 0x200 00:17:56.574 raid5f : 5.27 337.69 21.11 0.00 0.00 9370021.06 211.95 395619.94 00:17:56.574 [2024-12-07T16:43:55.473Z] =================================================================================================================== 00:17:56.574 [2024-12-07T16:43:55.473Z] Total : 752.84 47.05 0.00 0.00 8462753.31 185.12 395619.94 00:17:56.833 00:17:56.833 real 0m6.290s 00:17:56.833 user 0m11.547s 00:17:56.833 sys 0m0.339s 00:17:56.833 16:43:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:56.833 16:43:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.833 ************************************ 00:17:56.833 END TEST bdev_verify_big_io 00:17:56.833 ************************************ 00:17:57.093 16:43:55 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:57.093 16:43:55 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:57.093 16:43:55 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:57.093 16:43:55 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:57.093 ************************************ 00:17:57.093 START TEST bdev_write_zeroes 00:17:57.093 ************************************ 00:17:57.093 16:43:55 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:57.093 [2024-12-07 16:43:55.842287] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:57.093 [2024-12-07 16:43:55.842450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101251 ] 00:17:57.357 [2024-12-07 16:43:56.002430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.357 [2024-12-07 16:43:56.084819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.626 Running I/O for 1 seconds... 
00:17:58.564 26151.00 IOPS, 102.15 MiB/s 00:17:58.564 Latency(us) 00:17:58.564 [2024-12-07T16:43:57.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.564 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:58.564 raid5f : 1.01 26131.34 102.08 0.00 0.00 4882.26 1659.86 6696.69 00:17:58.564 [2024-12-07T16:43:57.463Z] =================================================================================================================== 00:17:58.564 [2024-12-07T16:43:57.463Z] Total : 26131.34 102.08 0.00 0.00 4882.26 1659.86 6696.69 00:17:59.133 00:17:59.133 real 0m2.027s 00:17:59.133 user 0m1.574s 00:17:59.133 sys 0m0.330s 00:17:59.133 16:43:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:59.133 16:43:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:59.133 ************************************ 00:17:59.133 END TEST bdev_write_zeroes 00:17:59.133 ************************************ 00:17:59.133 16:43:57 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.133 16:43:57 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:59.133 16:43:57 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:59.133 16:43:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:59.133 ************************************ 00:17:59.133 START TEST bdev_json_nonenclosed 00:17:59.133 ************************************ 00:17:59.133 16:43:57 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.133 [2024-12-07 
16:43:57.926066] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:59.133 [2024-12-07 16:43:57.926568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101298 ] 00:17:59.393 [2024-12-07 16:43:58.087623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.393 [2024-12-07 16:43:58.170361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.393 [2024-12-07 16:43:58.170493] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:59.393 [2024-12-07 16:43:58.170524] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:59.393 [2024-12-07 16:43:58.170539] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:59.651 00:17:59.651 real 0m0.486s 00:17:59.651 user 0m0.238s 00:17:59.651 sys 0m0.144s 00:17:59.651 16:43:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:59.651 16:43:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:59.651 ************************************ 00:17:59.651 END TEST bdev_json_nonenclosed 00:17:59.651 ************************************ 00:17:59.651 16:43:58 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.652 16:43:58 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:59.652 16:43:58 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:59.652 16:43:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:59.652 
************************************ 00:17:59.652 START TEST bdev_json_nonarray 00:17:59.652 ************************************ 00:17:59.652 16:43:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.652 [2024-12-07 16:43:58.491328] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:59.652 [2024-12-07 16:43:58.491462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101319 ] 00:17:59.910 [2024-12-07 16:43:58.653851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.910 [2024-12-07 16:43:58.736241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.910 [2024-12-07 16:43:58.736389] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:59.910 [2024-12-07 16:43:58.736418] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:59.910 [2024-12-07 16:43:58.736434] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:00.167 00:18:00.167 real 0m0.500s 00:18:00.167 user 0m0.258s 00:18:00.167 sys 0m0.137s 00:18:00.167 16:43:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.167 16:43:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:00.167 ************************************ 00:18:00.167 END TEST bdev_json_nonarray 00:18:00.167 ************************************ 00:18:00.167 16:43:58 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:18:00.167 16:43:58 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:18:00.167 16:43:58 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:18:00.167 16:43:58 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:00.167 16:43:58 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:18:00.167 16:43:58 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:00.167 16:43:58 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:00.167 16:43:58 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:18:00.167 16:43:58 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:18:00.167 16:43:58 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:18:00.168 16:43:58 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:18:00.168 00:18:00.168 real 0m37.449s 00:18:00.168 user 0m49.833s 00:18:00.168 sys 0m5.565s 00:18:00.168 16:43:58 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.168 16:43:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:00.168 
************************************ 00:18:00.168 END TEST blockdev_raid5f 00:18:00.168 ************************************ 00:18:00.168 16:43:59 -- spdk/autotest.sh@194 -- # uname -s 00:18:00.168 16:43:59 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:18:00.168 16:43:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:00.168 16:43:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:00.168 16:43:59 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:18:00.168 16:43:59 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:18:00.168 16:43:59 -- spdk/autotest.sh@256 -- # timing_exit lib 00:18:00.168 16:43:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:00.168 16:43:59 -- common/autotest_common.sh@10 -- # set +x 00:18:00.426 16:43:59 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:18:00.426 16:43:59 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:18:00.426 16:43:59 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:18:00.426 16:43:59 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:18:00.426 16:43:59 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:00.426 16:43:59 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:00.426 16:43:59 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:18:00.426 16:43:59 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:18:00.426 16:43:59 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:18:00.426 16:43:59 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:00.426 16:43:59 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:18:00.426 16:43:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:18:00.426 16:43:59 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:18:00.426 16:43:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:18:00.426 16:43:59 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:18:00.426 16:43:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:18:00.426 16:43:59 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:18:00.426 16:43:59 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:18:00.426 16:43:59 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:18:00.426 16:43:59 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:18:00.427 16:43:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.427 16:43:59 -- common/autotest_common.sh@10 -- # set +x 00:18:00.427 16:43:59 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:18:00.427 16:43:59 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:18:00.427 16:43:59 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:18:00.427 16:43:59 -- common/autotest_common.sh@10 -- # set +x 00:18:02.330 INFO: APP EXITING 00:18:02.330 INFO: killing all VMs 00:18:02.330 INFO: killing vhost app 00:18:02.330 INFO: EXIT DONE 00:18:02.897 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:02.897 Waiting for block devices as requested 00:18:02.897 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:03.156 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:04.093 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:04.093 Cleaning 00:18:04.093 Removing: /var/run/dpdk/spdk0/config 00:18:04.093 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:18:04.093 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:18:04.093 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:18:04.093 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:18:04.093 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:18:04.093 Removing: /var/run/dpdk/spdk0/hugepage_info 00:18:04.093 Removing: /dev/shm/spdk_tgt_trace.pid69325 00:18:04.093 Removing: /var/run/dpdk/spdk0 00:18:04.093 Removing: /var/run/dpdk/spdk_pid100348 00:18:04.093 Removing: /var/run/dpdk/spdk_pid100612 00:18:04.093 Removing: /var/run/dpdk/spdk_pid100651 00:18:04.093 Removing: /var/run/dpdk/spdk_pid100687 00:18:04.093 Removing: /var/run/dpdk/spdk_pid100914 00:18:04.093 Removing: /var/run/dpdk/spdk_pid101079 00:18:04.093 Removing: 
/var/run/dpdk/spdk_pid101163 00:18:04.093 Removing: /var/run/dpdk/spdk_pid101251 00:18:04.093 Removing: /var/run/dpdk/spdk_pid101298 00:18:04.093 Removing: /var/run/dpdk/spdk_pid101319 00:18:04.093 Removing: /var/run/dpdk/spdk_pid69156 00:18:04.093 Removing: /var/run/dpdk/spdk_pid69325 00:18:04.093 Removing: /var/run/dpdk/spdk_pid69531 00:18:04.093 Removing: /var/run/dpdk/spdk_pid69614 00:18:04.093 Removing: /var/run/dpdk/spdk_pid69642 00:18:04.093 Removing: /var/run/dpdk/spdk_pid69754 00:18:04.093 Removing: /var/run/dpdk/spdk_pid69772 00:18:04.093 Removing: /var/run/dpdk/spdk_pid69960 00:18:04.093 Removing: /var/run/dpdk/spdk_pid70028 00:18:04.093 Removing: /var/run/dpdk/spdk_pid70113 00:18:04.093 Removing: /var/run/dpdk/spdk_pid70213 00:18:04.093 Removing: /var/run/dpdk/spdk_pid70299 00:18:04.093 Removing: /var/run/dpdk/spdk_pid70333 00:18:04.093 Removing: /var/run/dpdk/spdk_pid70364 00:18:04.093 Removing: /var/run/dpdk/spdk_pid70440 00:18:04.093 Removing: /var/run/dpdk/spdk_pid70557 00:18:04.093 Removing: /var/run/dpdk/spdk_pid70990 00:18:04.093 Removing: /var/run/dpdk/spdk_pid71037 00:18:04.093 Removing: /var/run/dpdk/spdk_pid71085 00:18:04.093 Removing: /var/run/dpdk/spdk_pid71101 00:18:04.093 Removing: /var/run/dpdk/spdk_pid71164 00:18:04.093 Removing: /var/run/dpdk/spdk_pid71175 00:18:04.093 Removing: /var/run/dpdk/spdk_pid71244 00:18:04.093 Removing: /var/run/dpdk/spdk_pid71262 00:18:04.093 Removing: /var/run/dpdk/spdk_pid71315 00:18:04.093 Removing: /var/run/dpdk/spdk_pid71333 00:18:04.093 Removing: /var/run/dpdk/spdk_pid71375 00:18:04.093 Removing: /var/run/dpdk/spdk_pid71393 00:18:04.093 Removing: /var/run/dpdk/spdk_pid71520 00:18:04.093 Removing: /var/run/dpdk/spdk_pid71562 00:18:04.093 Removing: /var/run/dpdk/spdk_pid71640 00:18:04.093 Removing: /var/run/dpdk/spdk_pid72819 00:18:04.093 Removing: /var/run/dpdk/spdk_pid73025 00:18:04.093 Removing: /var/run/dpdk/spdk_pid73154 00:18:04.093 Removing: /var/run/dpdk/spdk_pid73764 00:18:04.093 Removing: 
/var/run/dpdk/spdk_pid73965 00:18:04.093 Removing: /var/run/dpdk/spdk_pid74099 00:18:04.353 Removing: /var/run/dpdk/spdk_pid74715 00:18:04.353 Removing: /var/run/dpdk/spdk_pid75033 00:18:04.353 Removing: /var/run/dpdk/spdk_pid75163 00:18:04.353 Removing: /var/run/dpdk/spdk_pid76504 00:18:04.353 Removing: /var/run/dpdk/spdk_pid76746 00:18:04.353 Removing: /var/run/dpdk/spdk_pid76881 00:18:04.353 Removing: /var/run/dpdk/spdk_pid78227 00:18:04.353 Removing: /var/run/dpdk/spdk_pid78469 00:18:04.353 Removing: /var/run/dpdk/spdk_pid78604 00:18:04.353 Removing: /var/run/dpdk/spdk_pid79950 00:18:04.353 Removing: /var/run/dpdk/spdk_pid80385 00:18:04.353 Removing: /var/run/dpdk/spdk_pid80514 00:18:04.353 Removing: /var/run/dpdk/spdk_pid81955 00:18:04.353 Removing: /var/run/dpdk/spdk_pid82213 00:18:04.353 Removing: /var/run/dpdk/spdk_pid82343 00:18:04.353 Removing: /var/run/dpdk/spdk_pid83779 00:18:04.353 Removing: /var/run/dpdk/spdk_pid84027 00:18:04.353 Removing: /var/run/dpdk/spdk_pid84168 00:18:04.353 Removing: /var/run/dpdk/spdk_pid85606 00:18:04.353 Removing: /var/run/dpdk/spdk_pid86082 00:18:04.353 Removing: /var/run/dpdk/spdk_pid86211 00:18:04.353 Removing: /var/run/dpdk/spdk_pid86349 00:18:04.353 Removing: /var/run/dpdk/spdk_pid86751 00:18:04.353 Removing: /var/run/dpdk/spdk_pid87462 00:18:04.353 Removing: /var/run/dpdk/spdk_pid87827 00:18:04.353 Removing: /var/run/dpdk/spdk_pid88510 00:18:04.353 Removing: /var/run/dpdk/spdk_pid88935 00:18:04.353 Removing: /var/run/dpdk/spdk_pid89665 00:18:04.353 Removing: /var/run/dpdk/spdk_pid90063 00:18:04.353 Removing: /var/run/dpdk/spdk_pid91982 00:18:04.353 Removing: /var/run/dpdk/spdk_pid92415 00:18:04.353 Removing: /var/run/dpdk/spdk_pid92834 00:18:04.353 Removing: /var/run/dpdk/spdk_pid94880 00:18:04.353 Removing: /var/run/dpdk/spdk_pid95349 00:18:04.353 Removing: /var/run/dpdk/spdk_pid95854 00:18:04.353 Removing: /var/run/dpdk/spdk_pid96899 00:18:04.353 Removing: /var/run/dpdk/spdk_pid97216 00:18:04.353 Removing: 
/var/run/dpdk/spdk_pid98131 00:18:04.353 Removing: /var/run/dpdk/spdk_pid98447 00:18:04.353 Removing: /var/run/dpdk/spdk_pid99370 00:18:04.353 Removing: /var/run/dpdk/spdk_pid99683 00:18:04.353 Clean 00:18:04.353 16:44:03 -- common/autotest_common.sh@1451 -- # return 0 00:18:04.353 16:44:03 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:18:04.353 16:44:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:04.353 16:44:03 -- common/autotest_common.sh@10 -- # set +x 00:18:04.613 16:44:03 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:18:04.613 16:44:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:04.613 16:44:03 -- common/autotest_common.sh@10 -- # set +x 00:18:04.613 16:44:03 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:18:04.613 16:44:03 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:18:04.613 16:44:03 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:18:04.613 16:44:03 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:18:04.613 16:44:03 -- spdk/autotest.sh@394 -- # hostname 00:18:04.613 16:44:03 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:18:04.873 geninfo: WARNING: invalid characters removed from testname! 
00:18:26.817 16:44:25 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:30.114 16:44:28 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:32.023 16:44:30 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:33.934 16:44:32 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:36.470 16:44:34 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:38.375 16:44:36 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:40.285 16:44:38 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:18:40.285 16:44:39 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:18:40.285 16:44:39 -- common/autotest_common.sh@1681 -- $ lcov --version 00:18:40.285 16:44:39 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:18:40.285 16:44:39 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:18:40.285 16:44:39 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:18:40.285 16:44:39 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:18:40.285 16:44:39 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:18:40.285 16:44:39 -- scripts/common.sh@336 -- $ IFS=.-: 00:18:40.285 16:44:39 -- scripts/common.sh@336 -- $ read -ra ver1 00:18:40.285 16:44:39 -- scripts/common.sh@337 -- $ IFS=.-: 00:18:40.285 16:44:39 -- scripts/common.sh@337 -- $ read -ra ver2 00:18:40.285 16:44:39 -- scripts/common.sh@338 -- $ local 'op=<' 00:18:40.285 16:44:39 -- scripts/common.sh@340 -- $ ver1_l=2 00:18:40.285 16:44:39 -- scripts/common.sh@341 -- $ ver2_l=1 00:18:40.285 16:44:39 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:18:40.285 16:44:39 -- scripts/common.sh@344 -- $ case "$op" in 00:18:40.285 16:44:39 -- scripts/common.sh@345 -- $ : 1 00:18:40.285 16:44:39 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:18:40.285 16:44:39 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.285 16:44:39 -- scripts/common.sh@365 -- $ decimal 1 00:18:40.285 16:44:39 -- scripts/common.sh@353 -- $ local d=1 00:18:40.285 16:44:39 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:18:40.285 16:44:39 -- scripts/common.sh@355 -- $ echo 1 00:18:40.285 16:44:39 -- scripts/common.sh@365 -- $ ver1[v]=1 00:18:40.285 16:44:39 -- scripts/common.sh@366 -- $ decimal 2 00:18:40.285 16:44:39 -- scripts/common.sh@353 -- $ local d=2 00:18:40.285 16:44:39 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:18:40.285 16:44:39 -- scripts/common.sh@355 -- $ echo 2 00:18:40.285 16:44:39 -- scripts/common.sh@366 -- $ ver2[v]=2 00:18:40.285 16:44:39 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:18:40.285 16:44:39 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:18:40.285 16:44:39 -- scripts/common.sh@368 -- $ return 0 00:18:40.285 16:44:39 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.285 16:44:39 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:18:40.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.285 --rc genhtml_branch_coverage=1 00:18:40.285 --rc genhtml_function_coverage=1 00:18:40.285 --rc genhtml_legend=1 00:18:40.285 --rc geninfo_all_blocks=1 00:18:40.285 --rc geninfo_unexecuted_blocks=1 00:18:40.285 00:18:40.285 ' 00:18:40.285 16:44:39 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:18:40.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.285 --rc genhtml_branch_coverage=1 00:18:40.285 --rc genhtml_function_coverage=1 00:18:40.285 --rc genhtml_legend=1 00:18:40.285 --rc geninfo_all_blocks=1 00:18:40.285 --rc geninfo_unexecuted_blocks=1 00:18:40.285 00:18:40.285 ' 00:18:40.285 16:44:39 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:18:40.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.285 --rc genhtml_branch_coverage=1 00:18:40.285 --rc 
genhtml_function_coverage=1 00:18:40.285 --rc genhtml_legend=1 00:18:40.285 --rc geninfo_all_blocks=1 00:18:40.285 --rc geninfo_unexecuted_blocks=1 00:18:40.285 00:18:40.285 ' 00:18:40.285 16:44:39 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:18:40.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.285 --rc genhtml_branch_coverage=1 00:18:40.285 --rc genhtml_function_coverage=1 00:18:40.285 --rc genhtml_legend=1 00:18:40.285 --rc geninfo_all_blocks=1 00:18:40.285 --rc geninfo_unexecuted_blocks=1 00:18:40.285 00:18:40.285 ' 00:18:40.285 16:44:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.285 16:44:39 -- scripts/common.sh@15 -- $ shopt -s extglob 00:18:40.285 16:44:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:18:40.285 16:44:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.285 16:44:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.285 16:44:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.285 16:44:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.285 16:44:39 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:40.285 16:44:39 -- paths/export.sh@5 -- $ export PATH
00:18:40.285 16:44:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:40.285 16:44:39 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:18:40.285 16:44:39 -- common/autobuild_common.sh@479 -- $ date +%s
00:18:40.285 16:44:39 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733589879.XXXXXX
00:18:40.286 16:44:39 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733589879.Z49pmi
00:18:40.286 16:44:39 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:18:40.286 16:44:39 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:18:40.286 16:44:39 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:18:40.286 16:44:39 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:18:40.286 16:44:39 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:18:40.286 16:44:39 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:18:40.286 16:44:39 -- common/autobuild_common.sh@495 -- $ get_config_params
00:18:40.286 16:44:39 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:18:40.286 16:44:39 -- common/autotest_common.sh@10 -- $ set +x
00:18:40.286 16:44:39 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:18:40.286 16:44:39 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:18:40.286 16:44:39 -- pm/common@17 -- $ local monitor
00:18:40.286 16:44:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:40.546 16:44:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:40.546 16:44:39 -- pm/common@25 -- $ sleep 1
00:18:40.546 16:44:39 -- pm/common@21 -- $ date +%s
00:18:40.546 16:44:39 -- pm/common@21 -- $ date +%s
00:18:40.546 16:44:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733589879
00:18:40.546 16:44:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733589879
00:18:40.546 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733589879_collect-cpu-load.pm.log
00:18:40.546 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733589879_collect-vmstat.pm.log
00:18:41.485 16:44:40 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:18:41.485 16:44:40 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:18:41.485 16:44:40 -- spdk/autopackage.sh@14 -- $ timing_finish
00:18:41.485 16:44:40 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:41.485 16:44:40 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:41.485 16:44:40 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:18:41.485 16:44:40 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:18:41.485 16:44:40 -- pm/common@29 -- $ signal_monitor_resources TERM
00:18:41.485 16:44:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:18:41.485 16:44:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:41.485 16:44:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:18:41.485 16:44:40 -- pm/common@44 -- $ pid=102811
00:18:41.485 16:44:40 -- pm/common@50 -- $ kill -TERM 102811
00:18:41.485 16:44:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:41.485 16:44:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:18:41.485 16:44:40 -- pm/common@44 -- $ pid=102813
00:18:41.485 16:44:40 -- pm/common@50 -- $ kill -TERM 102813
00:18:41.485 + [[ -n 6164 ]]
00:18:41.485 + sudo kill 6164
00:18:41.495 [Pipeline] }
00:18:41.512 [Pipeline] // timeout
00:18:41.518 [Pipeline] }
00:18:41.535 [Pipeline] // stage
00:18:41.542 [Pipeline] }
00:18:41.607 [Pipeline] // catchError
00:18:41.614 [Pipeline] stage
00:18:41.615 [Pipeline] { (Stop VM)
00:18:41.623 [Pipeline] sh
00:18:41.901 + vagrant halt
00:18:44.437 ==> default: Halting domain...
00:18:52.581 [Pipeline] sh
00:18:52.940 + vagrant destroy -f
00:18:55.475 ==> default: Removing domain...
00:18:55.488 [Pipeline] sh
00:18:55.773 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:18:55.782 [Pipeline] }
00:18:55.794 [Pipeline] // stage
00:18:55.800 [Pipeline] }
00:18:55.811 [Pipeline] // dir
00:18:55.817 [Pipeline] }
00:18:55.830 [Pipeline] // wrap
00:18:55.836 [Pipeline] }
00:18:55.850 [Pipeline] // catchError
00:18:55.860 [Pipeline] stage
00:18:55.862 [Pipeline] { (Epilogue)
00:18:55.875 [Pipeline] sh
00:18:56.158 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:19:01.456 [Pipeline] catchError
00:19:01.458 [Pipeline] {
00:19:01.477 [Pipeline] sh
00:19:01.770 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:19:01.770 Artifacts sizes are good
00:19:01.780 [Pipeline] }
00:19:01.797 [Pipeline] // catchError
00:19:01.811 [Pipeline] archiveArtifacts
00:19:01.818 Archiving artifacts
00:19:01.922 [Pipeline] cleanWs
00:19:01.939 [WS-CLEANUP] Deleting project workspace...
00:19:01.939 [WS-CLEANUP] Deferred wipeout is used...
00:19:01.945 [WS-CLEANUP] done
00:19:01.947 [Pipeline] }
00:19:01.964 [Pipeline] // stage
00:19:01.973 [Pipeline] }
00:19:01.992 [Pipeline] // node
00:19:01.996 [Pipeline] End of Pipeline
00:19:02.024 Finished: SUCCESS